2025-09-23 06:50:21.454562 | Job console starting
2025-09-23 06:50:21.467398 | Updating git repos
2025-09-23 06:50:21.547689 | Cloning repos into workspace
2025-09-23 06:50:21.775060 | Restoring repo states
2025-09-23 06:50:21.822004 | Merging changes
2025-09-23 06:50:21.822030 | Checking out repos
2025-09-23 06:50:22.124567 | Preparing playbooks
2025-09-23 06:50:22.819281 | Running Ansible setup
2025-09-23 06:50:27.160180 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-23 06:50:27.907266 |
2025-09-23 06:50:27.907425 | PLAY [Base pre]
2025-09-23 06:50:27.925692 |
2025-09-23 06:50:27.925821 | TASK [Setup log path fact]
2025-09-23 06:50:27.955843 | orchestrator | ok
2025-09-23 06:50:27.974241 |
2025-09-23 06:50:27.974383 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-23 06:50:28.019346 | orchestrator | ok
2025-09-23 06:50:28.034673 |
2025-09-23 06:50:28.034799 | TASK [emit-job-header : Print job information]
2025-09-23 06:50:28.080138 | # Job Information
2025-09-23 06:50:28.080354 | Ansible Version: 2.16.14
2025-09-23 06:50:28.080405 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-23 06:50:28.080454 | Pipeline: post
2025-09-23 06:50:28.080504 | Executor: 521e9411259a
2025-09-23 06:50:28.080536 | Triggered by: https://github.com/osism/testbed/commit/0ba055c0e87686a63510391bb83663ed3324904b
2025-09-23 06:50:28.080569 | Event ID: 8fe02898-9849-11f0-883a-68ac9ed8558b
2025-09-23 06:50:28.090140 |
2025-09-23 06:50:28.090266 | LOOP [emit-job-header : Print node information]
2025-09-23 06:50:28.221673 | orchestrator | ok:
2025-09-23 06:50:28.221961 | orchestrator | # Node Information
2025-09-23 06:50:28.221998 | orchestrator | Inventory Hostname: orchestrator
2025-09-23 06:50:28.222023 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-23 06:50:28.222045 | orchestrator | Username: zuul-testbed06
2025-09-23 06:50:28.222066 | orchestrator | Distro: Debian 12.12
2025-09-23 06:50:28.222089 | orchestrator | Provider: static-testbed
2025-09-23 06:50:28.222111 | orchestrator | Region:
2025-09-23 06:50:28.222131 | orchestrator | Label: testbed-orchestrator
2025-09-23 06:50:28.222151 | orchestrator | Product Name: OpenStack Nova
2025-09-23 06:50:28.222170 | orchestrator | Interface IP: 81.163.193.140
2025-09-23 06:50:28.243523 |
2025-09-23 06:50:28.243708 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-23 06:50:28.728228 | orchestrator -> localhost | changed
2025-09-23 06:50:28.740988 |
2025-09-23 06:50:28.741141 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-23 06:50:29.848434 | orchestrator -> localhost | changed
2025-09-23 06:50:29.863947 |
2025-09-23 06:50:29.864097 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-23 06:50:30.153518 | orchestrator -> localhost | ok
2025-09-23 06:50:30.161984 |
2025-09-23 06:50:30.162116 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-23 06:50:30.192302 | orchestrator | ok
2025-09-23 06:50:30.209265 | orchestrator | included: /var/lib/zuul/builds/cb22b4db87e44be8827bfb43641a1067/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-23 06:50:30.217641 |
2025-09-23 06:50:30.217742 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-23 06:50:31.897881 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-23 06:50:31.898145 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/cb22b4db87e44be8827bfb43641a1067/work/cb22b4db87e44be8827bfb43641a1067_id_rsa
2025-09-23 06:50:31.898188 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/cb22b4db87e44be8827bfb43641a1067/work/cb22b4db87e44be8827bfb43641a1067_id_rsa.pub
2025-09-23 06:50:31.898216 | orchestrator -> localhost | The key fingerprint is:
2025-09-23 06:50:31.898245 | orchestrator -> localhost | SHA256:eY41tnSgRaKhhE1GIOh4kezSZRhvw7bfuOvkNz8WJsI zuul-build-sshkey
2025-09-23 06:50:31.898268 | orchestrator -> localhost | The key's randomart image is:
2025-09-23 06:50:31.898301 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-23 06:50:31.898324 | orchestrator -> localhost | |o.oX= . . . |
2025-09-23 06:50:31.898346 | orchestrator -> localhost | |..*=+. o o |
2025-09-23 06:50:31.898366 | orchestrator -> localhost | |oo +B . o |
2025-09-23 06:50:31.898387 | orchestrator -> localhost | |o.+o o + . |
2025-09-23 06:50:31.898408 | orchestrator -> localhost | | o .. S * . |
2025-09-23 06:50:31.898438 | orchestrator -> localhost | | .Eo.Bo+ |
2025-09-23 06:50:31.898459 | orchestrator -> localhost | | +.ooo. |
2025-09-23 06:50:31.898495 | orchestrator -> localhost | | o .o o |
2025-09-23 06:50:31.898518 | orchestrator -> localhost | | .=o +.. |
2025-09-23 06:50:31.898539 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-23 06:50:31.898598 | orchestrator -> localhost | ok: Runtime: 0:00:01.154902
2025-09-23 06:50:31.906430 |
2025-09-23 06:50:31.906558 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-23 06:50:31.937554 | orchestrator | ok
2025-09-23 06:50:31.948202 | orchestrator | included: /var/lib/zuul/builds/cb22b4db87e44be8827bfb43641a1067/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-23 06:50:31.961747 |
2025-09-23 06:50:31.961859 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-23 06:50:31.995652 | orchestrator | skipping: Conditional result was False
2025-09-23 06:50:32.009444 |
2025-09-23 06:50:32.009628 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-23 06:50:32.610221 | orchestrator | changed
2025-09-23 06:50:32.616933 |
2025-09-23 06:50:32.617041 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-23 06:50:32.916725 | orchestrator | ok
2025-09-23 06:50:32.923464 |
2025-09-23 06:50:32.923609 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-23 06:50:33.341695 | orchestrator | ok
2025-09-23 06:50:33.347954 |
2025-09-23 06:50:33.348073 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-23 06:50:33.775296 | orchestrator | ok
2025-09-23 06:50:33.781494 |
2025-09-23 06:50:33.781594 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-23 06:50:33.805115 | orchestrator | skipping: Conditional result was False
2025-09-23 06:50:33.812407 |
2025-09-23 06:50:33.812561 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-23 06:50:34.294687 | orchestrator -> localhost | changed
2025-09-23 06:50:34.308839 |
2025-09-23 06:50:34.308961 | TASK [add-build-sshkey : Add back temp key]
2025-09-23 06:50:34.640949 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/cb22b4db87e44be8827bfb43641a1067/work/cb22b4db87e44be8827bfb43641a1067_id_rsa (zuul-build-sshkey)
2025-09-23 06:50:34.641348 | orchestrator -> localhost | ok: Runtime: 0:00:00.010643
2025-09-23 06:50:34.652445 |
2025-09-23 06:50:34.652608 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-23 06:50:35.085858 | orchestrator | ok
2025-09-23 06:50:35.093463 |
2025-09-23 06:50:35.093600 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-23 06:50:35.128001 | orchestrator | skipping: Conditional result was False
2025-09-23 06:50:35.176072 |
2025-09-23 06:50:35.176197 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-23 06:50:35.575718 | orchestrator | ok
2025-09-23 06:50:35.590354 |
2025-09-23 06:50:35.590533 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-23 06:50:35.620868 | orchestrator | ok
2025-09-23 06:50:35.628337 |
2025-09-23 06:50:35.633137 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-23 06:50:36.064199 | orchestrator -> localhost | ok
2025-09-23 06:50:36.074616 |
2025-09-23 06:50:36.074770 | TASK [validate-host : Collect information about the host]
2025-09-23 06:50:37.232606 | orchestrator | ok
2025-09-23 06:50:37.247528 |
2025-09-23 06:50:37.247650 | TASK [validate-host : Sanitize hostname]
2025-09-23 06:50:37.308127 | orchestrator | ok
2025-09-23 06:50:37.314350 |
2025-09-23 06:50:37.314508 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-23 06:50:38.155349 | orchestrator -> localhost | changed
2025-09-23 06:50:38.163672 |
2025-09-23 06:50:38.163849 | TASK [validate-host : Collect information about zuul worker]
2025-09-23 06:50:38.641105 | orchestrator | ok
2025-09-23 06:50:38.646686 |
2025-09-23 06:50:38.646804 | TASK [validate-host : Write out all zuul information for each host]
2025-09-23 06:50:39.191033 | orchestrator -> localhost | changed
2025-09-23 06:50:39.202097 |
2025-09-23 06:50:39.202231 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-23 06:50:39.473677 | orchestrator | ok
2025-09-23 06:50:39.482887 |
2025-09-23 06:50:39.483010 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-23 06:51:14.945611 | orchestrator | changed:
2025-09-23 06:51:14.945783 | orchestrator | .d..t...... src/
2025-09-23 06:51:14.945819 | orchestrator | .d..t...... src/github.com/
2025-09-23 06:51:14.945844 | orchestrator | .d..t...... src/github.com/osism/
2025-09-23 06:51:14.945867 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-23 06:51:14.945888 | orchestrator | RedHat.yml
2025-09-23 06:51:14.958934 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-23 06:51:14.958952 | orchestrator | RedHat.yml
2025-09-23 06:51:14.959037 | orchestrator | = 2.2.0"...
2025-09-23 06:51:27.572524 | orchestrator | 06:51:27.572 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-23 06:51:27.598564 | orchestrator | 06:51:27.598 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-23 06:51:27.753404 | orchestrator | 06:51:27.753 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-23 06:51:28.230833 | orchestrator | 06:51:28.230 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-23 06:51:28.300903 | orchestrator | 06:51:28.300 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-23 06:51:28.789603 | orchestrator | 06:51:28.789 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-23 06:51:28.860051 | orchestrator | 06:51:28.859 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-23 06:51:29.532914 | orchestrator | 06:51:29.532 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-23 06:51:29.533026 | orchestrator | 06:51:29.532 STDOUT terraform: Providers are signed by their developers.
2025-09-23 06:51:29.533042 | orchestrator | 06:51:29.532 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-23 06:51:29.533054 | orchestrator | 06:51:29.532 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-23 06:51:29.533128 | orchestrator | 06:51:29.533 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-23 06:51:29.533156 | orchestrator | 06:51:29.533 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-23 06:51:29.533303 | orchestrator | 06:51:29.533 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-23 06:51:29.533319 | orchestrator | 06:51:29.533 STDOUT terraform: you run "tofu init" in the future.
2025-09-23 06:51:29.533799 | orchestrator | 06:51:29.533 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-23 06:51:29.533935 | orchestrator | 06:51:29.533 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-23 06:51:29.533949 | orchestrator | 06:51:29.533 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-23 06:51:29.533964 | orchestrator | 06:51:29.533 STDOUT terraform: should now work.
2025-09-23 06:51:29.533978 | orchestrator | 06:51:29.533 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-23 06:51:29.534067 | orchestrator | 06:51:29.533 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-23 06:51:29.534107 | orchestrator | 06:51:29.534 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-23 06:51:29.696265 | orchestrator | 06:51:29.695 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-09-23 06:51:29.696332 | orchestrator | 06:51:29.695 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-23 06:51:29.953042 | orchestrator | 06:51:29.952 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-23 06:51:29.953113 | orchestrator | 06:51:29.952 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-23 06:51:29.953126 | orchestrator | 06:51:29.952 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-23 06:51:29.953135 | orchestrator | 06:51:29.952 STDOUT terraform: for this configuration.
2025-09-23 06:51:30.101763 | orchestrator | 06:51:30.098 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-09-23 06:51:30.101811 | orchestrator | 06:51:30.098 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-23 06:51:30.202649 | orchestrator | 06:51:30.202 STDOUT terraform: ci.auto.tfvars
2025-09-23 06:51:30.202774 | orchestrator | 06:51:30.202 STDOUT terraform: default_custom.tf
2025-09-23 06:51:30.328009 | orchestrator | 06:51:30.327 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
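The Terragrunt warnings above name their own replacements: `TG_TF_PATH` instead of `TERRAGRUNT_TFPATH`, and explicit `run --` pass-throughs instead of the deprecated `workspace` and `fmt` shortcuts. A minimal sketch of the suggested invocations, using the binary path from this job (adjust for your environment):

```shell
# New variable name for pointing Terragrunt at the OpenTofu/Terraform binary
# (replaces the deprecated TERRAGRUNT_TFPATH).
export TG_TF_PATH=/home/zuul-testbed06/terraform

# Deprecated shortcut commands become explicit pass-throughs to the wrapped tool:
terragrunt run -- workspace new ci   # instead of: terragrunt workspace new ci
terragrunt run -- fmt                # instead of: terragrunt fmt
```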
2025-09-23 06:51:31.258120 | orchestrator | 06:51:31.257 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-23 06:51:31.784474 | orchestrator | 06:51:31.784 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-23 06:51:32.005026 | orchestrator | 06:51:32.004 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-23 06:51:32.005101 | orchestrator | 06:51:32.004 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-23 06:51:32.005109 | orchestrator | 06:51:32.005 STDOUT terraform:   + create
2025-09-23 06:51:32.005115 | orchestrator | 06:51:32.005 STDOUT terraform:  <= read (data resources)
2025-09-23 06:51:32.005120 | orchestrator | 06:51:32.005 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-23 06:51:32.005126 | orchestrator | 06:51:32.005 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-23 06:51:32.005161 | orchestrator | 06:51:32.005 STDOUT terraform:   # (config refers to values not yet known)
2025-09-23 06:51:32.005192 | orchestrator | 06:51:32.005 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-23 06:51:32.005223 | orchestrator | 06:51:32.005 STDOUT terraform:       + checksum = (known after apply)
2025-09-23 06:51:32.005250 | orchestrator | 06:51:32.005 STDOUT terraform:       + created_at = (known after apply)
2025-09-23 06:51:32.005289 | orchestrator | 06:51:32.005 STDOUT terraform:       + file = (known after apply)
2025-09-23 06:51:32.005310 | orchestrator | 06:51:32.005 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.005343 | orchestrator | 06:51:32.005 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.005380 | orchestrator | 06:51:32.005 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-23 06:51:32.005400 | orchestrator | 06:51:32.005 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-09-23 06:51:32.005421 | orchestrator | 06:51:32.005 STDOUT terraform:       + most_recent = true
2025-09-23 06:51:32.005448 | orchestrator | 06:51:32.005 STDOUT terraform:       + name = (known after apply)
2025-09-23 06:51:32.005477 | orchestrator | 06:51:32.005 STDOUT terraform:       + protected = (known after apply)
2025-09-23 06:51:32.005507 | orchestrator | 06:51:32.005 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.005543 | orchestrator | 06:51:32.005 STDOUT terraform:       + schema = (known after apply)
2025-09-23 06:51:32.005562 | orchestrator | 06:51:32.005 STDOUT terraform:       + size_bytes = (known after apply)
2025-09-23 06:51:32.005590 | orchestrator | 06:51:32.005 STDOUT terraform:       + tags = (known after apply)
2025-09-23 06:51:32.005618 | orchestrator | 06:51:32.005 STDOUT terraform:       + updated_at = (known after apply)
2025-09-23 06:51:32.005634 | orchestrator | 06:51:32.005 STDOUT terraform:     }
2025-09-23 06:51:32.005703 | orchestrator | 06:51:32.005 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-23 06:51:32.005731 | orchestrator | 06:51:32.005 STDOUT terraform:   # (config refers to values not yet known)
2025-09-23 06:51:32.005765 | orchestrator | 06:51:32.005 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-23 06:51:32.005800 | orchestrator | 06:51:32.005 STDOUT terraform:       + checksum = (known after apply)
2025-09-23 06:51:32.005821 | orchestrator | 06:51:32.005 STDOUT terraform:       + created_at = (known after apply)
2025-09-23 06:51:32.005848 | orchestrator | 06:51:32.005 STDOUT terraform:       + file = (known after apply)
2025-09-23 06:51:32.005885 | orchestrator | 06:51:32.005 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.005904 | orchestrator | 06:51:32.005 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.005934 | orchestrator | 06:51:32.005 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-23 06:51:32.005970 | orchestrator | 06:51:32.005 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-09-23 06:51:32.005992 | orchestrator | 06:51:32.005 STDOUT terraform:       + most_recent = true
2025-09-23 06:51:32.006035 | orchestrator | 06:51:32.005 STDOUT terraform:       + name = (known after apply)
2025-09-23 06:51:32.006058 | orchestrator | 06:51:32.006 STDOUT terraform:       + protected = (known after apply)
2025-09-23 06:51:32.006085 | orchestrator | 06:51:32.006 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.006123 | orchestrator | 06:51:32.006 STDOUT terraform:       + schema = (known after apply)
2025-09-23 06:51:32.006144 | orchestrator | 06:51:32.006 STDOUT terraform:       + size_bytes = (known after apply)
2025-09-23 06:51:32.006173 | orchestrator | 06:51:32.006 STDOUT terraform:       + tags = (known after apply)
2025-09-23 06:51:32.006209 | orchestrator | 06:51:32.006 STDOUT terraform:       + updated_at = (known after apply)
2025-09-23 06:51:32.006216 | orchestrator | 06:51:32.006 STDOUT terraform:     }
2025-09-23 06:51:32.006248 | orchestrator | 06:51:32.006 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-23 06:51:32.006278 | orchestrator | 06:51:32.006 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-23 06:51:32.006314 | orchestrator | 06:51:32.006 STDOUT terraform:       + content = (known after apply)
2025-09-23 06:51:32.006349 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-23 06:51:32.006389 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-23 06:51:32.006423 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-23 06:51:32.006464 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-23 06:51:32.006493 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-23 06:51:32.006531 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-23 06:51:32.006561 | orchestrator | 06:51:32.006 STDOUT terraform:       + directory_permission = "0777"
2025-09-23 06:51:32.006578 | orchestrator | 06:51:32.006 STDOUT terraform:       + file_permission = "0644"
2025-09-23 06:51:32.006612 | orchestrator | 06:51:32.006 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-09-23 06:51:32.006649 | orchestrator | 06:51:32.006 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.006656 | orchestrator | 06:51:32.006 STDOUT terraform:     }
2025-09-23 06:51:32.006721 | orchestrator | 06:51:32.006 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-23 06:51:32.006741 | orchestrator | 06:51:32.006 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-23 06:51:32.006777 | orchestrator | 06:51:32.006 STDOUT terraform:       + content = (known after apply)
2025-09-23 06:51:32.006815 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-23 06:51:32.006848 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-23 06:51:32.006882 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-23 06:51:32.006918 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-23 06:51:32.006960 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-23 06:51:32.006986 | orchestrator | 06:51:32.006 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-23 06:51:32.007009 | orchestrator | 06:51:32.006 STDOUT terraform:       + directory_permission = "0777"
2025-09-23 06:51:32.007042 | orchestrator | 06:51:32.007 STDOUT terraform:       + file_permission = "0644"
2025-09-23 06:51:32.007067 | orchestrator | 06:51:32.007 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-09-23 06:51:32.007110 | orchestrator | 06:51:32.007 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.007117 | orchestrator | 06:51:32.007 STDOUT terraform:     }
2025-09-23 06:51:32.007138 | orchestrator | 06:51:32.007 STDOUT terraform:   # local_file.inventory will be created
2025-09-23 06:51:32.007170 | orchestrator | 06:51:32.007 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-23 06:51:32.007215 | orchestrator | 06:51:32.007 STDOUT terraform:       + content = (known after apply)
2025-09-23 06:51:32.007243 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-23 06:51:32.007285 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-23 06:51:32.007313 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-23 06:51:32.007346 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-23 06:51:32.007380 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-23 06:51:32.007415 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-23 06:51:32.007447 | orchestrator | 06:51:32.007 STDOUT terraform:       + directory_permission = "0777"
2025-09-23 06:51:32.007465 | orchestrator | 06:51:32.007 STDOUT terraform:       + file_permission = "0644"
2025-09-23 06:51:32.007494 | orchestrator | 06:51:32.007 STDOUT terraform:       + filename = "inventory.ci"
2025-09-23 06:51:32.007537 | orchestrator | 06:51:32.007 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.007543 | orchestrator | 06:51:32.007 STDOUT terraform:     }
2025-09-23 06:51:32.007566 | orchestrator | 06:51:32.007 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-23 06:51:32.007596 | orchestrator | 06:51:32.007 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-23 06:51:32.007639 | orchestrator | 06:51:32.007 STDOUT terraform:       + content = (sensitive value)
2025-09-23 06:51:32.007707 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-23 06:51:32.007727 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-23 06:51:32.007760 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-23 06:51:32.007806 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-23 06:51:32.007832 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-23 06:51:32.007876 | orchestrator | 06:51:32.007 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-23 06:51:32.007894 | orchestrator | 06:51:32.007 STDOUT terraform:       + directory_permission = "0700"
2025-09-23 06:51:32.007915 | orchestrator | 06:51:32.007 STDOUT terraform:       + file_permission = "0600"
2025-09-23 06:51:32.007946 | orchestrator | 06:51:32.007 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-09-23 06:51:32.007982 | orchestrator | 06:51:32.007 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.007988 | orchestrator | 06:51:32.007 STDOUT terraform:     }
2025-09-23 06:51:32.008019 | orchestrator | 06:51:32.007 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-23 06:51:32.008050 | orchestrator | 06:51:32.008 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-23 06:51:32.008071 | orchestrator | 06:51:32.008 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.008078 | orchestrator | 06:51:32.008 STDOUT terraform:     }
2025-09-23 06:51:32.008135 | orchestrator | 06:51:32.008 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-23 06:51:32.008184 | orchestrator | 06:51:32.008 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-23 06:51:32.008216 | orchestrator | 06:51:32.008 STDOUT terraform:       + attachment = (known after apply)
2025-09-23 06:51:32.008241 | orchestrator | 06:51:32.008 STDOUT terraform:       + availability_zone = "nova"
2025-09-23 06:51:32.008282 | orchestrator | 06:51:32.008 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.008310 | orchestrator | 06:51:32.008 STDOUT terraform:       + image_id = (known after apply)
2025-09-23 06:51:32.008345 | orchestrator | 06:51:32.008 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.008389 | orchestrator | 06:51:32.008 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-09-23 06:51:32.008423 | orchestrator | 06:51:32.008 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.008454 | orchestrator | 06:51:32.008 STDOUT terraform:       + size = 80
2025-09-23 06:51:32.008460 | orchestrator | 06:51:32.008 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-23 06:51:32.008485 | orchestrator | 06:51:32.008 STDOUT terraform:       + volume_type = "ssd"
2025-09-23 06:51:32.008493 | orchestrator | 06:51:32.008 STDOUT terraform:     }
2025-09-23 06:51:32.008544 | orchestrator | 06:51:32.008 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-23 06:51:32.008588 | orchestrator | 06:51:32.008 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-23 06:51:32.008622 | orchestrator | 06:51:32.008 STDOUT terraform:       + attachment = (known after apply)
2025-09-23 06:51:32.008646 | orchestrator | 06:51:32.008 STDOUT terraform:       + availability_zone = "nova"
2025-09-23 06:51:32.008692 | orchestrator | 06:51:32.008 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.008725 | orchestrator | 06:51:32.008 STDOUT terraform:       + image_id = (known after apply)
2025-09-23 06:51:32.008767 | orchestrator | 06:51:32.008 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.008802 | orchestrator | 06:51:32.008 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-09-23 06:51:32.008850 | orchestrator | 06:51:32.008 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.008857 | orchestrator | 06:51:32.008 STDOUT terraform:       + size = 80
2025-09-23 06:51:32.008877 | orchestrator | 06:51:32.008 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-23 06:51:32.008901 | orchestrator | 06:51:32.008 STDOUT terraform:       + volume_type = "ssd"
2025-09-23 06:51:32.008907 | orchestrator | 06:51:32.008 STDOUT terraform:     }
2025-09-23 06:51:32.008956 | orchestrator | 06:51:32.008 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-23 06:51:32.009011 | orchestrator | 06:51:32.008 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-23 06:51:32.009035 | orchestrator | 06:51:32.008 STDOUT terraform:       + attachment = (known after apply)
2025-09-23 06:51:32.009058 | orchestrator | 06:51:32.009 STDOUT terraform:       + availability_zone = "nova"
2025-09-23 06:51:32.009096 | orchestrator | 06:51:32.009 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.009129 | orchestrator | 06:51:32.009 STDOUT terraform:       + image_id = (known after apply)
2025-09-23 06:51:32.009165 | orchestrator | 06:51:32.009 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.009208 | orchestrator | 06:51:32.009 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-09-23 06:51:32.009242 | orchestrator | 06:51:32.009 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.009263 | orchestrator | 06:51:32.009 STDOUT terraform:       + size = 80
2025-09-23 06:51:32.009286 | orchestrator | 06:51:32.009 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-23 06:51:32.009317 | orchestrator | 06:51:32.009 STDOUT terraform:       + volume_type = "ssd"
2025-09-23 06:51:32.009324 | orchestrator | 06:51:32.009 STDOUT terraform:     }
2025-09-23 06:51:32.009366 | orchestrator | 06:51:32.009 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-23 06:51:32.009410 | orchestrator | 06:51:32.009 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-23 06:51:32.009445 | orchestrator | 06:51:32.009 STDOUT terraform:       + attachment = (known after apply)
2025-09-23 06:51:32.009471 | orchestrator | 06:51:32.009 STDOUT terraform:       + availability_zone = "nova"
2025-09-23 06:51:32.009503 | orchestrator | 06:51:32.009 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.009538 | orchestrator | 06:51:32.009 STDOUT terraform:       + image_id = (known after apply)
2025-09-23 06:51:32.009574 | orchestrator | 06:51:32.009 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.009617 | orchestrator | 06:51:32.009 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-09-23 06:51:32.009653 | orchestrator | 06:51:32.009 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.009682 | orchestrator | 06:51:32.009 STDOUT terraform:       + size = 80
2025-09-23 06:51:32.009706 | orchestrator | 06:51:32.009 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-23 06:51:32.009730 | orchestrator | 06:51:32.009 STDOUT terraform:       + volume_type = "ssd"
2025-09-23 06:51:32.009736 | orchestrator | 06:51:32.009 STDOUT terraform:     }
2025-09-23 06:51:32.009810 | orchestrator | 06:51:32.009 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-23 06:51:32.009862 | orchestrator | 06:51:32.009 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-23 06:51:32.009898 | orchestrator | 06:51:32.009 STDOUT terraform:       + attachment = (known after apply)
2025-09-23 06:51:32.009922 | orchestrator | 06:51:32.009 STDOUT terraform:       + availability_zone = "nova"
2025-09-23 06:51:32.009957 | orchestrator | 06:51:32.009 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.009992 | orchestrator | 06:51:32.009 STDOUT terraform:       + image_id = (known after apply)
2025-09-23 06:51:32.010041 | orchestrator | 06:51:32.009 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.010083 | orchestrator | 06:51:32.010 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-09-23 06:51:32.010117 | orchestrator | 06:51:32.010 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.010137 | orchestrator | 06:51:32.010 STDOUT terraform:       + size = 80
2025-09-23 06:51:32.010160 | orchestrator | 06:51:32.010 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-23 06:51:32.010184 | orchestrator | 06:51:32.010 STDOUT terraform:       + volume_type = "ssd"
2025-09-23 06:51:32.010190 | orchestrator | 06:51:32.010 STDOUT terraform:     }
2025-09-23 06:51:32.010238 | orchestrator | 06:51:32.010 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-23 06:51:32.010288 | orchestrator | 06:51:32.010 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-23 06:51:32.010317 | orchestrator | 06:51:32.010 STDOUT terraform:       + attachment = (known after apply)
2025-09-23 06:51:32.010339 | orchestrator | 06:51:32.010 STDOUT terraform:       + availability_zone = "nova"
2025-09-23 06:51:32.010377 | orchestrator | 06:51:32.010 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.010410 | orchestrator | 06:51:32.010 STDOUT terraform:       + image_id = (known after apply)
2025-09-23 06:51:32.010444 | orchestrator | 06:51:32.010 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.010487 | orchestrator | 06:51:32.010 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-09-23 06:51:32.010522 | orchestrator | 06:51:32.010 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.010543 | orchestrator | 06:51:32.010 STDOUT terraform:       + size = 80
2025-09-23 06:51:32.010567 | orchestrator | 06:51:32.010 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-23 06:51:32.010590 | orchestrator | 06:51:32.010 STDOUT terraform:       + volume_type = "ssd"
2025-09-23 06:51:32.010603 | orchestrator | 06:51:32.010 STDOUT terraform:     }
2025-09-23 06:51:32.010654 | orchestrator | 06:51:32.010 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-23 06:51:32.010714 | orchestrator | 06:51:32.010 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-23 06:51:32.010747 | orchestrator | 06:51:32.010 STDOUT terraform:       + attachment = (known after apply)
2025-09-23 06:51:32.010770 | orchestrator | 06:51:32.010 STDOUT terraform:       + availability_zone = "nova"
2025-09-23 06:51:32.010805 | orchestrator | 06:51:32.010 STDOUT terraform:       + id = (known after apply)
2025-09-23 06:51:32.010840 | orchestrator | 06:51:32.010 STDOUT terraform:       + image_id = (known after apply)
2025-09-23 06:51:32.010874 | orchestrator | 06:51:32.010 STDOUT terraform:       + metadata = (known after apply)
2025-09-23 06:51:32.010917 | orchestrator | 06:51:32.010 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-09-23 06:51:32.010953 | orchestrator | 06:51:32.010 STDOUT terraform:       + region = (known after apply)
2025-09-23 06:51:32.010972 | orchestrator | 06:51:32.010 STDOUT terraform:       + size = 80
2025-09-23 06:51:32.010996 | orchestrator | 06:51:32.010 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-23 06:51:32.011019 | orchestrator | 06:51:32.010 STDOUT terraform:       + volume_type = "ssd"
2025-09-23 06:51:32.011025 | orchestrator | 06:51:32.011 STDOUT terraform:     }
2025-09-23 06:51:32.011072 | orchestrator | 06:51:32.011 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-23 06:51:32.011113 | orchestrator | 06:51:32.011 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-23 06:51:32.011152 | orchestrator | 06:51:32.011 STDOUT
terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.011172 | orchestrator | 06:51:32.011 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.011208 | orchestrator | 06:51:32.011 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.011242 | orchestrator | 06:51:32.011 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.011282 | orchestrator | 06:51:32.011 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-23 06:51:32.011317 | orchestrator | 06:51:32.011 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.011339 | orchestrator | 06:51:32.011 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.011364 | orchestrator | 06:51:32.011 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 06:51:32.011389 | orchestrator | 06:51:32.011 STDOUT terraform:  + volume_type = "ssd" 2025-09-23 06:51:32.011395 | orchestrator | 06:51:32.011 STDOUT terraform:  } 2025-09-23 06:51:32.011442 | orchestrator | 06:51:32.011 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-23 06:51:32.011485 | orchestrator | 06:51:32.011 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-23 06:51:32.011519 | orchestrator | 06:51:32.011 STDOUT terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.011544 | orchestrator | 06:51:32.011 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.011580 | orchestrator | 06:51:32.011 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.011614 | orchestrator | 06:51:32.011 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.011652 | orchestrator | 06:51:32.011 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-23 06:51:32.011698 | orchestrator | 06:51:32.011 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.011719 | orchestrator | 06:51:32.011 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.011741 | 
orchestrator | 06:51:32.011 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 06:51:32.011764 | orchestrator | 06:51:32.011 STDOUT terraform:  + volume_type = "ssd" 2025-09-23 06:51:32.011779 | orchestrator | 06:51:32.011 STDOUT terraform:  } 2025-09-23 06:51:32.011823 | orchestrator | 06:51:32.011 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-23 06:51:32.011865 | orchestrator | 06:51:32.011 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-23 06:51:32.011900 | orchestrator | 06:51:32.011 STDOUT terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.011922 | orchestrator | 06:51:32.011 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.011957 | orchestrator | 06:51:32.011 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.011992 | orchestrator | 06:51:32.011 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.012028 | orchestrator | 06:51:32.011 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-23 06:51:32.012062 | orchestrator | 06:51:32.012 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.012083 | orchestrator | 06:51:32.012 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.012110 | orchestrator | 06:51:32.012 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 06:51:32.012130 | orchestrator | 06:51:32.012 STDOUT terraform:  + volume_type = "ssd" 2025-09-23 06:51:32.012137 | orchestrator | 06:51:32.012 STDOUT terraform:  } 2025-09-23 06:51:32.012184 | orchestrator | 06:51:32.012 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-23 06:51:32.012226 | orchestrator | 06:51:32.012 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-23 06:51:32.012262 | orchestrator | 06:51:32.012 STDOUT terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.012284 | orchestrator | 
06:51:32.012 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.012319 | orchestrator | 06:51:32.012 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.012354 | orchestrator | 06:51:32.012 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.012391 | orchestrator | 06:51:32.012 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-23 06:51:32.012426 | orchestrator | 06:51:32.012 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.012446 | orchestrator | 06:51:32.012 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.012471 | orchestrator | 06:51:32.012 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 06:51:32.012495 | orchestrator | 06:51:32.012 STDOUT terraform:  + volume_type = "ssd" 2025-09-23 06:51:32.012501 | orchestrator | 06:51:32.012 STDOUT terraform:  } 2025-09-23 06:51:32.012546 | orchestrator | 06:51:32.012 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-23 06:51:32.012588 | orchestrator | 06:51:32.012 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-23 06:51:32.012623 | orchestrator | 06:51:32.012 STDOUT terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.012646 | orchestrator | 06:51:32.012 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.012691 | orchestrator | 06:51:32.012 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.012726 | orchestrator | 06:51:32.012 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.012763 | orchestrator | 06:51:32.012 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-23 06:51:32.012797 | orchestrator | 06:51:32.012 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.012809 | orchestrator | 06:51:32.012 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.012836 | orchestrator | 06:51:32.012 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 
06:51:32.012859 | orchestrator | 06:51:32.012 STDOUT terraform:  + volume_type = "ssd" 2025-09-23 06:51:32.012865 | orchestrator | 06:51:32.012 STDOUT terraform:  } 2025-09-23 06:51:32.012912 | orchestrator | 06:51:32.012 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-23 06:51:32.012969 | orchestrator | 06:51:32.012 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-23 06:51:32.013021 | orchestrator | 06:51:32.012 STDOUT terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.013046 | orchestrator | 06:51:32.013 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.013081 | orchestrator | 06:51:32.013 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.013117 | orchestrator | 06:51:32.013 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.013154 | orchestrator | 06:51:32.013 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-23 06:51:32.013189 | orchestrator | 06:51:32.013 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.013214 | orchestrator | 06:51:32.013 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.013235 | orchestrator | 06:51:32.013 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 06:51:32.013259 | orchestrator | 06:51:32.013 STDOUT terraform:  + volume_type = "ssd" 2025-09-23 06:51:32.013265 | orchestrator | 06:51:32.013 STDOUT terraform:  } 2025-09-23 06:51:32.013312 | orchestrator | 06:51:32.013 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-23 06:51:32.013354 | orchestrator | 06:51:32.013 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-23 06:51:32.013388 | orchestrator | 06:51:32.013 STDOUT terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.013412 | orchestrator | 06:51:32.013 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.013447 | 
orchestrator | 06:51:32.013 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.013484 | orchestrator | 06:51:32.013 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.013523 | orchestrator | 06:51:32.013 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-23 06:51:32.013558 | orchestrator | 06:51:32.013 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.013577 | orchestrator | 06:51:32.013 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.013600 | orchestrator | 06:51:32.013 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 06:51:32.013624 | orchestrator | 06:51:32.013 STDOUT terraform:  + volume_type = "ssd" 2025-09-23 06:51:32.013630 | orchestrator | 06:51:32.013 STDOUT terraform:  } 2025-09-23 06:51:32.013693 | orchestrator | 06:51:32.013 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-23 06:51:32.013734 | orchestrator | 06:51:32.013 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-23 06:51:32.013769 | orchestrator | 06:51:32.013 STDOUT terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.013792 | orchestrator | 06:51:32.013 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.013828 | orchestrator | 06:51:32.013 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.013863 | orchestrator | 06:51:32.013 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.013904 | orchestrator | 06:51:32.013 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-23 06:51:32.013938 | orchestrator | 06:51:32.013 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.013959 | orchestrator | 06:51:32.013 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.013982 | orchestrator | 06:51:32.013 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 06:51:32.014006 | orchestrator | 06:51:32.013 STDOUT terraform:  + volume_type = "ssd" 
2025-09-23 06:51:32.014029 | orchestrator | 06:51:32.014 STDOUT terraform:  } 2025-09-23 06:51:32.014072 | orchestrator | 06:51:32.014 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-23 06:51:32.014114 | orchestrator | 06:51:32.014 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-23 06:51:32.014148 | orchestrator | 06:51:32.014 STDOUT terraform:  + attachment = (known after apply) 2025-09-23 06:51:32.014171 | orchestrator | 06:51:32.014 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.014205 | orchestrator | 06:51:32.014 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.014240 | orchestrator | 06:51:32.014 STDOUT terraform:  + metadata = (known after apply) 2025-09-23 06:51:32.014277 | orchestrator | 06:51:32.014 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-23 06:51:32.014313 | orchestrator | 06:51:32.014 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.014335 | orchestrator | 06:51:32.014 STDOUT terraform:  + size = 20 2025-09-23 06:51:32.014358 | orchestrator | 06:51:32.014 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-23 06:51:32.014382 | orchestrator | 06:51:32.014 STDOUT terraform:  + volume_type = "ssd" 2025-09-23 06:51:32.014388 | orchestrator | 06:51:32.014 STDOUT terraform:  } 2025-09-23 06:51:32.014435 | orchestrator | 06:51:32.014 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-23 06:51:32.014474 | orchestrator | 06:51:32.014 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-23 06:51:32.014508 | orchestrator | 06:51:32.014 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-23 06:51:32.014541 | orchestrator | 06:51:32.014 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-23 06:51:32.014575 | orchestrator | 06:51:32.014 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-23 06:51:32.014615 | orchestrator | 06:51:32.014 STDOUT terraform:  + all_tags = (known after apply) 2025-09-23 06:51:32.014635 | orchestrator | 06:51:32.014 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.014655 | orchestrator | 06:51:32.014 STDOUT terraform:  + config_drive = true 2025-09-23 06:51:32.014699 | orchestrator | 06:51:32.014 STDOUT terraform:  + created = (known after apply) 2025-09-23 06:51:32.014732 | orchestrator | 06:51:32.014 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-23 06:51:32.014763 | orchestrator | 06:51:32.014 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-23 06:51:32.014785 | orchestrator | 06:51:32.014 STDOUT terraform:  + force_delete = false 2025-09-23 06:51:32.014819 | orchestrator | 06:51:32.014 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-23 06:51:32.014856 | orchestrator | 06:51:32.014 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.014887 | orchestrator | 06:51:32.014 STDOUT terraform:  + image_id = (known after apply) 2025-09-23 06:51:32.014923 | orchestrator | 06:51:32.014 STDOUT terraform:  + image_name = (known after apply) 2025-09-23 06:51:32.014946 | orchestrator | 06:51:32.014 STDOUT terraform:  + key_pair = "testbed" 2025-09-23 06:51:32.014976 | orchestrator | 06:51:32.014 STDOUT terraform:  + name = "testbed-manager" 2025-09-23 06:51:32.015000 | orchestrator | 06:51:32.014 STDOUT terraform:  + power_state = "active" 2025-09-23 06:51:32.015034 | orchestrator | 06:51:32.014 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.015067 | orchestrator | 06:51:32.015 STDOUT terraform:  + security_groups = (known after apply) 2025-09-23 06:51:32.015090 | orchestrator | 06:51:32.015 STDOUT terraform:  + stop_before_destroy = false 2025-09-23 06:51:32.015124 | orchestrator | 06:51:32.015 STDOUT terraform:  + updated = (known after apply) 2025-09-23 06:51:32.015154 | orchestrator | 06:51:32.015 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-23 06:51:32.015173 | orchestrator | 06:51:32.015 STDOUT terraform:  + block_device { 2025-09-23 06:51:32.015195 | orchestrator | 06:51:32.015 STDOUT terraform:  + boot_index = 0 2025-09-23 06:51:32.015224 | orchestrator | 06:51:32.015 STDOUT terraform:  + delete_on_termination = false 2025-09-23 06:51:32.015253 | orchestrator | 06:51:32.015 STDOUT terraform:  + destination_type = "volume" 2025-09-23 06:51:32.015279 | orchestrator | 06:51:32.015 STDOUT terraform:  + multiattach = false 2025-09-23 06:51:32.015308 | orchestrator | 06:51:32.015 STDOUT terraform:  + source_type = "volume" 2025-09-23 06:51:32.015345 | orchestrator | 06:51:32.015 STDOUT terraform:  + uuid = (known after apply) 2025-09-23 06:51:32.015358 | orchestrator | 06:51:32.015 STDOUT terraform:  } 2025-09-23 06:51:32.015373 | orchestrator | 06:51:32.015 STDOUT terraform:  + network { 2025-09-23 06:51:32.015394 | orchestrator | 06:51:32.015 STDOUT terraform:  + access_network = false 2025-09-23 06:51:32.015424 | orchestrator | 06:51:32.015 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-23 06:51:32.015456 | orchestrator | 06:51:32.015 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-23 06:51:32.015486 | orchestrator | 06:51:32.015 STDOUT terraform:  + mac = (known after apply) 2025-09-23 06:51:32.015516 | orchestrator | 06:51:32.015 STDOUT terraform:  + name = (known after apply) 2025-09-23 06:51:32.015548 | orchestrator | 06:51:32.015 STDOUT terraform:  + port = (known after apply) 2025-09-23 06:51:32.015578 | orchestrator | 06:51:32.015 STDOUT terraform:  + uuid = (known after apply) 2025-09-23 06:51:32.015585 | orchestrator | 06:51:32.015 STDOUT terraform:  } 2025-09-23 06:51:32.015601 | orchestrator | 06:51:32.015 STDOUT terraform:  } 2025-09-23 06:51:32.015643 | orchestrator | 06:51:32.015 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-23 06:51:32.015695 | orchestrator | 06:51:32.015 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-23 06:51:32.015729 | orchestrator | 06:51:32.015 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-23 06:51:32.015771 | orchestrator | 06:51:32.015 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-23 06:51:32.015796 | orchestrator | 06:51:32.015 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-23 06:51:32.015830 | orchestrator | 06:51:32.015 STDOUT terraform:  + all_tags = (known after apply) 2025-09-23 06:51:32.015852 | orchestrator | 06:51:32.015 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.015872 | orchestrator | 06:51:32.015 STDOUT terraform:  + config_drive = true 2025-09-23 06:51:32.015906 | orchestrator | 06:51:32.015 STDOUT terraform:  + created = (known after apply) 2025-09-23 06:51:32.015940 | orchestrator | 06:51:32.015 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-23 06:51:32.015970 | orchestrator | 06:51:32.015 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-23 06:51:32.015993 | orchestrator | 06:51:32.015 STDOUT terraform:  + force_delete = false 2025-09-23 06:51:32.016025 | orchestrator | 06:51:32.015 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-23 06:51:32.016061 | orchestrator | 06:51:32.016 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.016095 | orchestrator | 06:51:32.016 STDOUT terraform:  + image_id = (known after apply) 2025-09-23 06:51:32.016129 | orchestrator | 06:51:32.016 STDOUT terraform:  + image_name = (known after apply) 2025-09-23 06:51:32.016153 | orchestrator | 06:51:32.016 STDOUT terraform:  + key_pair = "testbed" 2025-09-23 06:51:32.016183 | orchestrator | 06:51:32.016 STDOUT terraform:  + name = "testbed-node-0" 2025-09-23 06:51:32.016207 | orchestrator | 06:51:32.016 STDOUT terraform:  + power_state = "active" 2025-09-23 06:51:32.016241 | orchestrator | 06:51:32.016 STDOUT terraform:  + region = (known after 
apply) 2025-09-23 06:51:32.016274 | orchestrator | 06:51:32.016 STDOUT terraform:  + security_groups = (known after apply) 2025-09-23 06:51:32.016297 | orchestrator | 06:51:32.016 STDOUT terraform:  + stop_before_destroy = false 2025-09-23 06:51:32.016331 | orchestrator | 06:51:32.016 STDOUT terraform:  + updated = (known after apply) 2025-09-23 06:51:32.016382 | orchestrator | 06:51:32.016 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-23 06:51:32.016399 | orchestrator | 06:51:32.016 STDOUT terraform:  + block_device { 2025-09-23 06:51:32.016422 | orchestrator | 06:51:32.016 STDOUT terraform:  + boot_index = 0 2025-09-23 06:51:32.016449 | orchestrator | 06:51:32.016 STDOUT terraform:  + delete_on_termination = false 2025-09-23 06:51:32.016478 | orchestrator | 06:51:32.016 STDOUT terraform:  + destination_type = "volume" 2025-09-23 06:51:32.016505 | orchestrator | 06:51:32.016 STDOUT terraform:  + multiattach = false 2025-09-23 06:51:32.016535 | orchestrator | 06:51:32.016 STDOUT terraform:  + source_type = "volume" 2025-09-23 06:51:32.016573 | orchestrator | 06:51:32.016 STDOUT terraform:  + uuid = (known after apply) 2025-09-23 06:51:32.016580 | orchestrator | 06:51:32.016 STDOUT terraform:  } 2025-09-23 06:51:32.016596 | orchestrator | 06:51:32.016 STDOUT terraform:  + network { 2025-09-23 06:51:32.016620 | orchestrator | 06:51:32.016 STDOUT terraform:  + access_network = false 2025-09-23 06:51:32.016651 | orchestrator | 06:51:32.016 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-23 06:51:32.016697 | orchestrator | 06:51:32.016 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-23 06:51:32.016721 | orchestrator | 06:51:32.016 STDOUT terraform:  + mac = (known after apply) 2025-09-23 06:51:32.016751 | orchestrator | 06:51:32.016 STDOUT terraform:  + name = (known after apply) 2025-09-23 06:51:32.016781 | orchestrator | 06:51:32.016 STDOUT terraform:  + port = (known after apply) 2025-09-23 
06:51:32.016812 | orchestrator | 06:51:32.016 STDOUT terraform:  + uuid = (known after apply) 2025-09-23 06:51:32.016827 | orchestrator | 06:51:32.016 STDOUT terraform:  } 2025-09-23 06:51:32.016834 | orchestrator | 06:51:32.016 STDOUT terraform:  } 2025-09-23 06:51:32.016879 | orchestrator | 06:51:32.016 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-23 06:51:32.016921 | orchestrator | 06:51:32.016 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-23 06:51:32.016954 | orchestrator | 06:51:32.016 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-23 06:51:32.016987 | orchestrator | 06:51:32.016 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-23 06:51:32.017020 | orchestrator | 06:51:32.016 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-23 06:51:32.017055 | orchestrator | 06:51:32.017 STDOUT terraform:  + all_tags = (known after apply) 2025-09-23 06:51:32.017078 | orchestrator | 06:51:32.017 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.017098 | orchestrator | 06:51:32.017 STDOUT terraform:  + config_drive = true 2025-09-23 06:51:32.017132 | orchestrator | 06:51:32.017 STDOUT terraform:  + created = (known after apply) 2025-09-23 06:51:32.017165 | orchestrator | 06:51:32.017 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-23 06:51:32.017194 | orchestrator | 06:51:32.017 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-23 06:51:32.017218 | orchestrator | 06:51:32.017 STDOUT terraform:  + force_delete = false 2025-09-23 06:51:32.017257 | orchestrator | 06:51:32.017 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-23 06:51:32.017288 | orchestrator | 06:51:32.017 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.017322 | orchestrator | 06:51:32.017 STDOUT terraform:  + image_id = (known after apply) 2025-09-23 06:51:32.017355 | orchestrator | 06:51:32.017 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-23 06:51:32.017379 | orchestrator | 06:51:32.017 STDOUT terraform:  + key_pair = "testbed" 2025-09-23 06:51:32.017408 | orchestrator | 06:51:32.017 STDOUT terraform:  + name = "testbed-node-1" 2025-09-23 06:51:32.017433 | orchestrator | 06:51:32.017 STDOUT terraform:  + power_state = "active" 2025-09-23 06:51:32.017481 | orchestrator | 06:51:32.017 STDOUT terraform:  + region = (known after apply) 2025-09-23 06:51:32.017515 | orchestrator | 06:51:32.017 STDOUT terraform:  + security_groups = (known after apply) 2025-09-23 06:51:32.017537 | orchestrator | 06:51:32.017 STDOUT terraform:  + stop_before_destroy = false 2025-09-23 06:51:32.017572 | orchestrator | 06:51:32.017 STDOUT terraform:  + updated = (known after apply) 2025-09-23 06:51:32.017621 | orchestrator | 06:51:32.017 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-23 06:51:32.017628 | orchestrator | 06:51:32.017 STDOUT terraform:  + block_device { 2025-09-23 06:51:32.017656 | orchestrator | 06:51:32.017 STDOUT terraform:  + boot_index = 0 2025-09-23 06:51:32.017702 | orchestrator | 06:51:32.017 STDOUT terraform:  + delete_on_termination = false 2025-09-23 06:51:32.017728 | orchestrator | 06:51:32.017 STDOUT terraform:  + destination_type = "volume" 2025-09-23 06:51:32.017756 | orchestrator | 06:51:32.017 STDOUT terraform:  + multiattach = false 2025-09-23 06:51:32.017784 | orchestrator | 06:51:32.017 STDOUT terraform:  + source_type = "volume" 2025-09-23 06:51:32.017821 | orchestrator | 06:51:32.017 STDOUT terraform:  + uuid = (known after apply) 2025-09-23 06:51:32.017828 | orchestrator | 06:51:32.017 STDOUT terraform:  } 2025-09-23 06:51:32.017845 | orchestrator | 06:51:32.017 STDOUT terraform:  + network { 2025-09-23 06:51:32.017864 | orchestrator | 06:51:32.017 STDOUT terraform:  + access_network = false 2025-09-23 06:51:32.017893 | orchestrator | 06:51:32.017 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-23 06:51:32.017923 | orchestrator | 06:51:32.017 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-23 06:51:32.017954 | orchestrator | 06:51:32.017 STDOUT terraform:  + mac = (known after apply) 2025-09-23 06:51:32.017984 | orchestrator | 06:51:32.017 STDOUT terraform:  + name = (known after apply) 2025-09-23 06:51:32.018039 | orchestrator | 06:51:32.017 STDOUT terraform:  + port = (known after apply) 2025-09-23 06:51:32.018057 | orchestrator | 06:51:32.018 STDOUT terraform:  + uuid = (known after apply) 2025-09-23 06:51:32.018063 | orchestrator | 06:51:32.018 STDOUT terraform:  } 2025-09-23 06:51:32.018081 | orchestrator | 06:51:32.018 STDOUT terraform:  } 2025-09-23 06:51:32.018180 | orchestrator | 06:51:32.018 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-23 06:51:32.018224 | orchestrator | 06:51:32.018 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-23 06:51:32.018258 | orchestrator | 06:51:32.018 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-23 06:51:32.018291 | orchestrator | 06:51:32.018 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-23 06:51:32.018325 | orchestrator | 06:51:32.018 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-23 06:51:32.018358 | orchestrator | 06:51:32.018 STDOUT terraform:  + all_tags = (known after apply) 2025-09-23 06:51:32.018381 | orchestrator | 06:51:32.018 STDOUT terraform:  + availability_zone = "nova" 2025-09-23 06:51:32.018401 | orchestrator | 06:51:32.018 STDOUT terraform:  + config_drive = true 2025-09-23 06:51:32.018434 | orchestrator | 06:51:32.018 STDOUT terraform:  + created = (known after apply) 2025-09-23 06:51:32.018468 | orchestrator | 06:51:32.018 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-23 06:51:32.018497 | orchestrator | 06:51:32.018 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-23 06:51:32.018520 | orchestrator | 06:51:32.018 
2025-09-23 06:51:32.018 | orchestrator | 06:51:32.018 STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
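For orientation, the management ports planned above (fixed IPs 192.168.16.10 through .15, each whitelisting the same set of extra prefixes) correspond to a Terraform definition roughly like the following. This is a hedged sketch, not the testbed repository's actual code; the `count`, the name pattern, and the referenced network/subnet resources are assumptions.

```hcl
# Hypothetical sketch of the port definition behind the plan output above.
# Resource types and attributes follow the OpenStack Terraform provider;
# the referenced network/subnet names are invented for illustration.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  # Pins each node to a predictable management address (.10 through .15).
  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"
  }

  # allowed_address_pairs lets the port send/receive traffic for additional
  # addresses despite port security, e.g. shared VIPs and routed prefixes.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}
```

Without the `allowed_address_pairs` entries, port security on the Neutron port would drop traffic sourced from the keepalived-style VIPs (.8/.9) and the internal gateway (.254).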
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
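The management rules above all show `security_group_id = (known after apply)` because each rule references its parent group, which does not exist yet at plan time. A hedged sketch of that wiring, mirroring the names in the plan (the exact source in the testbed repository may differ):

```hcl
# Hypothetical sketch: a security group and one of its ingress rules.
# Attribute names follow the OpenStack Terraform provider documentation.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

# Rule1 from the plan: allow SSH from anywhere. The security_group_id
# reference is what makes the value "(known after apply)" during planning.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

Rules 3 and 4 follow the same shape but omit the port range (all TCP/UDP ports) and restrict `remote_ip_prefix` to the internal 192.168.16.0/20 network, while rule 5 opens ICMP from anywhere.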
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known
after apply) 2025-09-23 06:51:32.050883 | orchestrator | 06:51:32.050 STDOUT terraform:  } 2025-09-23 06:51:32.051370 | orchestrator | 06:51:32.050 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-23 06:51:32.051798 | orchestrator | 06:51:32.051 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-23 06:51:32.052118 | orchestrator | 06:51:32.051 STDOUT terraform:  + all_tags = (known after apply) 2025-09-23 06:51:32.052407 | orchestrator | 06:51:32.052 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-23 06:51:32.052638 | orchestrator | 06:51:32.052 STDOUT terraform:  + dns_nameservers = [ 2025-09-23 06:51:32.052714 | orchestrator | 06:51:32.052 STDOUT terraform:  + "8.8.8.8", 2025-09-23 06:51:32.052955 | orchestrator | 06:51:32.052 STDOUT terraform:  + "9.9.9.9", 2025-09-23 06:51:32.053147 | orchestrator | 06:51:32.052 STDOUT terraform:  ] 2025-09-23 06:51:32.053321 | orchestrator | 06:51:32.053 STDOUT terraform:  + enable_dhcp = true 2025-09-23 06:51:32.053650 | orchestrator | 06:51:32.053 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-23 06:51:32.054056 | orchestrator | 06:51:32.053 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.054353 | orchestrator | 06:51:32.054 STDOUT terraform:  + ip_version = 4 2025-09-23 06:51:32.054719 | orchestrator | 06:51:32.054 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-23 06:51:32.055066 | orchestrator | 06:51:32.054 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-23 06:51:32.055442 | orchestrator | 06:51:32.055 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-23 06:51:32.055742 | orchestrator | 06:51:32.055 STDOUT terraform:  + network_id = (known after apply) 2025-09-23 06:51:32.055969 | orchestrator | 06:51:32.055 STDOUT terraform:  + no_gateway = false 2025-09-23 06:51:32.056330 | orchestrator | 06:51:32.055 STDOUT terraform:  + region = (known after 
apply) 2025-09-23 06:51:32.056716 | orchestrator | 06:51:32.056 STDOUT terraform:  + service_types = (known after apply) 2025-09-23 06:51:32.057088 | orchestrator | 06:51:32.056 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-23 06:51:32.057286 | orchestrator | 06:51:32.057 STDOUT terraform:  + allocation_pool { 2025-09-23 06:51:32.057545 | orchestrator | 06:51:32.057 STDOUT terraform:  + end = "192.168.31.250" 2025-09-23 06:51:32.057813 | orchestrator | 06:51:32.057 STDOUT terraform:  + start = "192.168.31.200 2025-09-23 06:51:32.058268 | orchestrator | 06:51:32.058 STDOUT terraform: " 2025-09-23 06:51:32.058424 | orchestrator | 06:51:32.058 STDOUT terraform:  } 2025-09-23 06:51:32.058497 | orchestrator | 06:51:32.058 STDOUT terraform:  } 2025-09-23 06:51:32.058610 | orchestrator | 06:51:32.058 STDOUT terraform:  # terraform_data.image will be created 2025-09-23 06:51:32.058786 | orchestrator | 06:51:32.058 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-23 06:51:32.058989 | orchestrator | 06:51:32.058 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.059139 | orchestrator | 06:51:32.059 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-23 06:51:32.059378 | orchestrator | 06:51:32.059 STDOUT terraform:  + output = (known after apply) 2025-09-23 06:51:32.059538 | orchestrator | 06:51:32.059 STDOUT terraform:  } 2025-09-23 06:51:32.059774 | orchestrator | 06:51:32.059 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-23 06:51:32.060121 | orchestrator | 06:51:32.059 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-23 06:51:32.060468 | orchestrator | 06:51:32.060 STDOUT terraform:  + id = (known after apply) 2025-09-23 06:51:32.060551 | orchestrator | 06:51:32.060 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-23 06:51:32.060802 | orchestrator | 06:51:32.060 STDOUT terraform:  + output = (known after apply) 2025-09-23 06:51:32.060925 | orchestrator | 06:51:32.060 STDOUT 
terraform:  } 2025-09-23 06:51:32.061167 | orchestrator | 06:51:32.060 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-23 06:51:32.061311 | orchestrator | 06:51:32.061 STDOUT terraform: Changes to Outputs: 2025-09-23 06:51:32.061493 | orchestrator | 06:51:32.061 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-23 06:51:32.061732 | orchestrator | 06:51:32.061 STDOUT terraform:  + private_key = (sensitive value) 2025-09-23 06:51:32.233207 | orchestrator | 06:51:32.233 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-23 06:51:32.233274 | orchestrator | 06:51:32.233 STDOUT terraform: terraform_data.image: Creating... 2025-09-23 06:51:32.233283 | orchestrator | 06:51:32.233 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=ac30efce-adbf-0ab5-f999-df05163d4c2e] 2025-09-23 06:51:32.233291 | orchestrator | 06:51:32.233 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=625ad7a2-1cfc-1bf4-fb88-302ae1ef1adb] 2025-09-23 06:51:32.241487 | orchestrator | 06:51:32.241 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-23 06:51:32.241531 | orchestrator | 06:51:32.241 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-23 06:51:32.244563 | orchestrator | 06:51:32.244 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-23 06:51:32.251016 | orchestrator | 06:51:32.250 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-23 06:51:32.252207 | orchestrator | 06:51:32.252 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-23 06:51:32.254583 | orchestrator | 06:51:32.253 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-23 06:51:32.262182 | orchestrator | 06:51:32.261 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
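[editor's note] The plan output above ("Plan: 64 to add") implies Terraform definitions along these lines. This is a minimal HCL sketch reconstructed only from the attributes visible in the log; the actual testbed configuration (variable names, counts, and the remaining resources) may differ.

```hcl
# Sketch reconstructed from the logged plan -- not the testbed's real source.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# rule1/rule2/rule3 in the log differ only in protocol (tcp/udp/icmp).
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]
  network_id      = openstack_networking_network_v2.net_management.id

  # Note: the logged plan shows the start address with a trailing
  # newline inside the quotes; it is likely intended to be this value.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```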
2025-09-23 06:51:32.262712 | orchestrator | 06:51:32.262 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-23 06:51:32.276834 | orchestrator | 06:51:32.274 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-23 06:51:32.279183 | orchestrator | 06:51:32.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-23 06:51:32.692627 | orchestrator | 06:51:32.691 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-23 06:51:32.697778 | orchestrator | 06:51:32.697 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-23 06:51:32.701507 | orchestrator | 06:51:32.701 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-23 06:51:32.703326 | orchestrator | 06:51:32.703 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-23 06:51:32.729540 | orchestrator | 06:51:32.729 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-09-23 06:51:32.737237 | orchestrator | 06:51:32.737 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-23 06:51:33.269590 | orchestrator | 06:51:33.269 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=e01e23de-2013-4a70-b81f-bb5ffd072e8c] 2025-09-23 06:51:33.281326 | orchestrator | 06:51:33.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-23 06:51:35.867460 | orchestrator | 06:51:35.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=fd6a0863-0d42-4019-9e23-eb994da62dbd] 2025-09-23 06:51:35.878144 | orchestrator | 06:51:35.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2025-09-23 06:51:35.883533 | orchestrator | 06:51:35.883 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=59088487-bcaf-4b18-9006-b2b85c395676] 2025-09-23 06:51:35.896196 | orchestrator | 06:51:35.895 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-23 06:51:35.909840 | orchestrator | 06:51:35.909 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=c2ff2f17-feac-486a-a8d3-f5343e47e8fb] 2025-09-23 06:51:35.917697 | orchestrator | 06:51:35.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=c90ab8a7-6741-4b53-9264-08db4b9d41dd] 2025-09-23 06:51:35.918372 | orchestrator | 06:51:35.918 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-23 06:51:35.919957 | orchestrator | 06:51:35.919 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=5c88e186-44c4-4f29-a716-3e862e71c173] 2025-09-23 06:51:35.924582 | orchestrator | 06:51:35.924 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-09-23 06:51:35.932643 | orchestrator | 06:51:35.932 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-23 06:51:35.946632 | orchestrator | 06:51:35.946 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=87ebb364-ac90-40d8-a46a-ebfab3ab7b91] 2025-09-23 06:51:35.953397 | orchestrator | 06:51:35.953 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-23 06:51:35.994713 | orchestrator | 06:51:35.994 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=0bff4510-9eaf-4f53-bf1a-5cee4a2246ec] 2025-09-23 06:51:36.007597 | orchestrator | 06:51:36.007 STDOUT terraform: local_file.id_rsa_pub: Creating... 
2025-09-23 06:51:36.012573 | orchestrator | 06:51:36.012 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=049f49abe61162abe15e76abe8886a11730c7b16] 2025-09-23 06:51:36.016834 | orchestrator | 06:51:36.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=7c71f819-4704-4446-9599-7b21db8e3013] 2025-09-23 06:51:36.018747 | orchestrator | 06:51:36.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=b75d5c1f-0301-4e14-8d60-793226b090b6] 2025-09-23 06:51:36.020709 | orchestrator | 06:51:36.020 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-23 06:51:36.023154 | orchestrator | 06:51:36.023 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-23 06:51:36.023705 | orchestrator | 06:51:36.023 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=dcf30ce75bd7de9a6cb932604fb44b777e19e4bf] 2025-09-23 06:51:36.651930 | orchestrator | 06:51:36.651 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=c666169e-f4a3-4a18-863e-3a2fdc794692] 2025-09-23 06:51:36.928013 | orchestrator | 06:51:36.927 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=eb4f7d00-1608-44b1-a40f-64b6cf7b70c1] 2025-09-23 06:51:36.934464 | orchestrator | 06:51:36.934 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
2025-09-23 06:51:39.276107 | orchestrator | 06:51:39.275 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=2db81c41-a192-4c8d-88cd-7bf1813310e7] 2025-09-23 06:51:39.299153 | orchestrator | 06:51:39.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=aab1398b-33d3-432d-85d3-6da114cbf6bf] 2025-09-23 06:51:39.333578 | orchestrator | 06:51:39.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=6cc438e1-0ca2-4ae5-90bc-25cf54c9d604] 2025-09-23 06:51:39.343256 | orchestrator | 06:51:39.342 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=8ba3c17a-eb80-4948-8dbf-766c30daa51c] 2025-09-23 06:51:39.356602 | orchestrator | 06:51:39.356 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=7e55276b-9f20-4253-94e7-5773ee8b5269] 2025-09-23 06:51:39.359269 | orchestrator | 06:51:39.359 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=3af071d7-e94b-4fe1-887b-7ba730e7037a] 2025-09-23 06:51:40.527094 | orchestrator | 06:51:40.526 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=2d9fd4ef-8b07-48df-b338-0d5f88747331] 2025-09-23 06:51:40.533098 | orchestrator | 06:51:40.532 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-23 06:51:40.534195 | orchestrator | 06:51:40.533 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-09-23 06:51:40.535392 | orchestrator | 06:51:40.535 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 
2025-09-23 06:51:40.801741 | orchestrator | 06:51:40.801 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=4d4c1a72-7143-4d43-a5d2-7388d1d71125] 2025-09-23 06:51:40.814908 | orchestrator | 06:51:40.814 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-23 06:51:40.822398 | orchestrator | 06:51:40.822 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-09-23 06:51:40.824066 | orchestrator | 06:51:40.823 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-23 06:51:40.824499 | orchestrator | 06:51:40.824 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-09-23 06:51:40.825358 | orchestrator | 06:51:40.825 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-09-23 06:51:40.825389 | orchestrator | 06:51:40.825 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-23 06:51:40.827007 | orchestrator | 06:51:40.826 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-23 06:51:40.836026 | orchestrator | 06:51:40.835 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-09-23 06:51:40.836462 | orchestrator | 06:51:40.836 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=a9c1cbab-7249-48f1-bf55-265131b18126] 2025-09-23 06:51:40.851272 | orchestrator | 06:51:40.851 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 
2025-09-23 06:51:41.160756 | orchestrator | 06:51:41.160 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=35830ba5-b515-4dd4-9373-2c962f33fefa] 2025-09-23 06:51:41.176453 | orchestrator | 06:51:41.176 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-23 06:51:41.369201 | orchestrator | 06:51:41.368 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=0a86589c-68db-4768-8f3c-3ae233d1da3d] 2025-09-23 06:51:41.379214 | orchestrator | 06:51:41.378 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-09-23 06:51:41.543729 | orchestrator | 06:51:41.543 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=1b890fd6-14a0-408a-8b8e-cb0d9813c74f] 2025-09-23 06:51:41.549925 | orchestrator | 06:51:41.549 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-23 06:51:41.605053 | orchestrator | 06:51:41.604 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=d5b1cadc-6398-441d-8558-ab63032af085] 2025-09-23 06:51:41.612039 | orchestrator | 06:51:41.611 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-23 06:51:41.617771 | orchestrator | 06:51:41.617 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=0b87366a-b671-4eb1-a1c9-d327e4727a68] 2025-09-23 06:51:41.622887 | orchestrator | 06:51:41.622 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
2025-09-23 06:51:41.672083 | orchestrator | 06:51:41.671 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=a35f0cf2-033b-47cd-8766-95bf4b3d8946] 2025-09-23 06:51:41.678570 | orchestrator | 06:51:41.678 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-23 06:51:41.684961 | orchestrator | 06:51:41.684 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=3fac347f-f2d3-44b9-9e1e-357a2d3ac792] 2025-09-23 06:51:41.700206 | orchestrator | 06:51:41.699 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-23 06:51:41.750577 | orchestrator | 06:51:41.750 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e430c879-ccb1-4a2f-849d-c7bffaf9be82] 2025-09-23 06:51:41.796648 | orchestrator | 06:51:41.796 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=173bc8cd-01bf-48fe-8b0c-3e4c98c7fb87] 2025-09-23 06:51:41.805013 | orchestrator | 06:51:41.804 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=cf802179-2b90-43d7-b92c-bc1d430291f8] 2025-09-23 06:51:41.882139 | orchestrator | 06:51:41.881 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=bbdf43a3-9f85-477d-b40e-607bc7144676] 2025-09-23 06:51:41.966322 | orchestrator | 06:51:41.965 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=99a0efed-25a6-4f90-8886-00a260754ee0] 2025-09-23 06:51:42.075531 | orchestrator | 06:51:42.075 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=4cfb2ae8-3669-492b-ae66-e0bc24849f96] 2025-09-23 06:51:42.146607 | orchestrator | 06:51:42.146 STDOUT 
terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=1e31da89-925d-4d2a-b17c-dc2529e4da2f] 2025-09-23 06:51:42.301581 | orchestrator | 06:51:42.301 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=24752f9e-4be6-47ea-b26b-ac5e0e94f12a] 2025-09-23 06:51:42.674900 | orchestrator | 06:51:42.674 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=41a236f7-f9fd-4a48-b22c-fbea42bab555] 2025-09-23 06:51:43.518511 | orchestrator | 06:51:43.518 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=9882f99e-f21c-4651-b9db-a043122abff7] 2025-09-23 06:51:43.537825 | orchestrator | 06:51:43.537 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-23 06:51:43.552386 | orchestrator | 06:51:43.552 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-23 06:51:43.557101 | orchestrator | 06:51:43.556 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-23 06:51:43.558768 | orchestrator | 06:51:43.558 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-09-23 06:51:43.561937 | orchestrator | 06:51:43.561 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-23 06:51:43.569891 | orchestrator | 06:51:43.569 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-09-23 06:51:43.574231 | orchestrator | 06:51:43.574 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 
2025-09-23 06:51:44.888917 | orchestrator | 06:51:44.888 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=d847cceb-82d3-48fa-8cbd-147cbd8fe897] 2025-09-23 06:51:44.906097 | orchestrator | 06:51:44.902 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-23 06:51:44.909864 | orchestrator | 06:51:44.909 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-23 06:51:44.909915 | orchestrator | 06:51:44.909 STDOUT terraform: local_file.inventory: Creating... 2025-09-23 06:51:44.912890 | orchestrator | 06:51:44.912 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=52c2e56f981542713c4334c1cb7b5ba45f373cde] 2025-09-23 06:51:44.915476 | orchestrator | 06:51:44.915 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=81819e6834739a78d6410e96fda5cc8bcfada749] 2025-09-23 06:51:46.285398 | orchestrator | 06:51:46.285 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=d847cceb-82d3-48fa-8cbd-147cbd8fe897] 2025-09-23 06:51:53.556428 | orchestrator | 06:51:53.556 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-23 06:51:53.559068 | orchestrator | 06:51:53.558 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-23 06:51:53.560989 | orchestrator | 06:51:53.560 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-23 06:51:53.566323 | orchestrator | 06:51:53.566 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-23 06:51:53.573052 | orchestrator | 06:51:53.572 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2025-09-23 06:51:53.573137 | orchestrator | 06:51:53.572 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-23 06:52:03.556863 | orchestrator | 06:52:03.556 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-23 06:52:03.560036 | orchestrator | 06:52:03.559 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-23 06:52:03.561174 | orchestrator | 06:52:03.560 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-23 06:52:03.567388 | orchestrator | 06:52:03.567 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-23 06:52:03.573782 | orchestrator | 06:52:03.573 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-23 06:52:03.573883 | orchestrator | 06:52:03.573 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-23 06:52:13.557716 | orchestrator | 06:52:13.557 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-09-23 06:52:13.560921 | orchestrator | 06:52:13.560 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-09-23 06:52:13.561999 | orchestrator | 06:52:13.561 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-09-23 06:52:13.568531 | orchestrator | 06:52:13.568 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-09-23 06:52:13.574744 | orchestrator | 06:52:13.574 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-09-23 06:52:13.574808 | orchestrator | 06:52:13.574 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[30s elapsed] 2025-09-23 06:52:13.996797 | orchestrator | 06:52:13.996 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=c3b5b9d7-165a-4ac1-85b7-b284febcec88] 2025-09-23 06:52:14.105586 | orchestrator | 06:52:14.105 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=97b5ef45-0f8f-4249-8306-567cd19df057] 2025-09-23 06:52:14.130863 | orchestrator | 06:52:14.130 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=d7b873ec-69fe-4672-b881-4a23d4729932] 2025-09-23 06:52:23.562398 | orchestrator | 06:52:23.562 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-09-23 06:52:23.575521 | orchestrator | 06:52:23.575 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-09-23 06:52:23.575585 | orchestrator | 06:52:23.575 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-09-23 06:52:24.190384 | orchestrator | 06:52:24.190 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 40s [id=8c147803-ffeb-45f6-a7f2-85f4d6e787f4] 2025-09-23 06:52:24.290237 | orchestrator | 06:52:24.289 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 40s [id=298fb469-c8a2-432d-a985-c2bbda422c9e] 2025-09-23 06:52:24.558122 | orchestrator | 06:52:24.557 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=6e2dd190-5fcc-4213-9386-dc6f3cddca93] 2025-09-23 06:52:24.591292 | orchestrator | 06:52:24.591 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-23 06:52:24.592252 | orchestrator | 06:52:24.592 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
2025-09-23 06:52:24.607555 | orchestrator | 06:52:24.607 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-23 06:52:24.609639 | orchestrator | 06:52:24.609 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-23 06:52:24.621878 | orchestrator | 06:52:24.621 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-23 06:52:24.628240 | orchestrator | 06:52:24.628 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=891703767193009374] 2025-09-23 06:52:24.656202 | orchestrator | 06:52:24.655 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-23 06:52:24.661210 | orchestrator | 06:52:24.660 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-09-23 06:52:24.665793 | orchestrator | 06:52:24.665 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-23 06:52:24.679286 | orchestrator | 06:52:24.677 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-23 06:52:24.680402 | orchestrator | 06:52:24.679 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-23 06:52:24.702713 | orchestrator | 06:52:24.702 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-09-23 06:52:27.977579 | orchestrator | 06:52:27.977 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=c3b5b9d7-165a-4ac1-85b7-b284febcec88/7c71f819-4704-4446-9599-7b21db8e3013] 2025-09-23 06:52:27.989249 | orchestrator | 06:52:27.987 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=97b5ef45-0f8f-4249-8306-567cd19df057/87ebb364-ac90-40d8-a46a-ebfab3ab7b91] 2025-09-23 06:52:28.009209 | orchestrator | 06:52:28.009 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=298fb469-c8a2-432d-a985-c2bbda422c9e/c2ff2f17-feac-486a-a8d3-f5343e47e8fb] 2025-09-23 06:52:28.032679 | orchestrator | 06:52:28.032 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=c3b5b9d7-165a-4ac1-85b7-b284febcec88/59088487-bcaf-4b18-9006-b2b85c395676] 2025-09-23 06:52:28.066113 | orchestrator | 06:52:28.065 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=298fb469-c8a2-432d-a985-c2bbda422c9e/b75d5c1f-0301-4e14-8d60-793226b090b6] 2025-09-23 06:52:28.072810 | orchestrator | 06:52:28.072 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=97b5ef45-0f8f-4249-8306-567cd19df057/fd6a0863-0d42-4019-9e23-eb994da62dbd] 2025-09-23 06:52:34.148155 | orchestrator | 06:52:34.147 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=c3b5b9d7-165a-4ac1-85b7-b284febcec88/c90ab8a7-6741-4b53-9264-08db4b9d41dd] 2025-09-23 06:52:34.161187 | orchestrator | 06:52:34.160 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=97b5ef45-0f8f-4249-8306-567cd19df057/0bff4510-9eaf-4f53-bf1a-5cee4a2246ec] 2025-09-23 06:52:34.176143 | orchestrator | 
06:52:34.175 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=298fb469-c8a2-432d-a985-c2bbda422c9e/5c88e186-44c4-4f29-a716-3e862e71c173] 2025-09-23 06:52:34.713758 | orchestrator | 06:52:34.713 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-23 06:52:44.718968 | orchestrator | 06:52:44.715 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-23 06:52:45.045723 | orchestrator | 06:52:45.045 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=ddae8c70-1363-4229-977c-740fa5b6b075] 2025-09-23 06:52:46.429844 | orchestrator | 06:52:46.429 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-09-23 06:52:46.429907 | orchestrator | 06:52:46.429 STDOUT terraform: Outputs: 2025-09-23 06:52:46.429917 | orchestrator | 06:52:46.429 STDOUT terraform: manager_address = 2025-09-23 06:52:46.429924 | orchestrator | 06:52:46.429 STDOUT terraform: private_key = 2025-09-23 06:52:46.851228 | orchestrator | ok: Runtime: 0:01:19.305422 2025-09-23 06:52:46.888435 | 2025-09-23 06:52:46.888612 | TASK [Create infrastructure (stable)] 2025-09-23 06:52:47.421710 | orchestrator | skipping: Conditional result was False 2025-09-23 06:52:47.432001 | 2025-09-23 06:52:47.432133 | TASK [Fetch manager address] 2025-09-23 06:52:47.879798 | orchestrator | ok 2025-09-23 06:52:47.889841 | 2025-09-23 06:52:47.889965 | TASK [Set manager_host address] 2025-09-23 06:52:47.968799 | orchestrator | ok 2025-09-23 06:52:47.978164 | 2025-09-23 06:52:47.978276 | LOOP [Update ansible collections] 2025-09-23 06:52:50.470464 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-23 06:52:50.470916 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-23 06:52:50.470982 | orchestrator | 
Starting galaxy collection install process 2025-09-23 06:52:50.471022 | orchestrator | Process install dependency map 2025-09-23 06:52:50.471057 | orchestrator | Starting collection install process 2025-09-23 06:52:50.471089 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-09-23 06:52:50.471125 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-09-23 06:52:50.471165 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-23 06:52:50.471239 | orchestrator | ok: Item: commons Runtime: 0:00:02.158972 2025-09-23 06:52:51.578124 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-23 06:52:51.578289 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-23 06:52:51.578341 | orchestrator | Starting galaxy collection install process 2025-09-23 06:52:51.578381 | orchestrator | Process install dependency map 2025-09-23 06:52:51.578419 | orchestrator | Starting collection install process 2025-09-23 06:52:51.578477 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-09-23 06:52:51.578514 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-09-23 06:52:51.578547 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-23 06:52:51.578600 | orchestrator | ok: Item: services Runtime: 0:00:00.825609 2025-09-23 06:52:51.602426 | 2025-09-23 06:52:51.602612 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-23 06:53:02.177415 | orchestrator | ok 2025-09-23 06:53:02.186973 | 2025-09-23 06:53:02.187086 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-09-23 06:54:02.241454 | orchestrator | ok 2025-09-23 06:54:02.251973 | 2025-09-23 06:54:02.252100 | TASK [Fetch manager ssh hostkey] 2025-09-23 06:54:03.830402 | orchestrator | Output suppressed because no_log was given 2025-09-23 06:54:03.843905 | 2025-09-23 06:54:03.844062 | TASK [Get ssh keypair from terraform environment] 2025-09-23 06:54:04.383456 | orchestrator | ok: Runtime: 0:00:00.014041 2025-09-23 06:54:04.401292 | 2025-09-23 06:54:04.401493 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-23 06:54:04.440339 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-23 06:54:04.450275 | 2025-09-23 06:54:04.450390 | TASK [Run manager part 0] 2025-09-23 06:54:05.669518 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-23 06:54:05.863009 | orchestrator | 2025-09-23 06:54:05.863109 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-23 06:54:05.863117 | orchestrator | 2025-09-23 06:54:05.863139 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-23 06:54:07.942468 | orchestrator | ok: [testbed-manager] 2025-09-23 06:54:07.942537 | orchestrator | 2025-09-23 06:54:07.942562 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-23 06:54:07.942573 | orchestrator | 2025-09-23 06:54:07.942583 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 06:54:09.999976 | orchestrator | ok: [testbed-manager] 2025-09-23 06:54:10.000031 | orchestrator | 2025-09-23 06:54:10.000041 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-23 06:54:10.743214 | 
orchestrator | ok: [testbed-manager] 2025-09-23 06:54:10.743280 | orchestrator | 2025-09-23 06:54:10.743289 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-23 06:54:10.798936 | orchestrator | skipping: [testbed-manager] 2025-09-23 06:54:10.798962 | orchestrator | 2025-09-23 06:54:10.798973 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-23 06:54:10.844892 | orchestrator | skipping: [testbed-manager] 2025-09-23 06:54:10.844916 | orchestrator | 2025-09-23 06:54:10.844923 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-23 06:54:10.892483 | orchestrator | skipping: [testbed-manager] 2025-09-23 06:54:10.892504 | orchestrator | 2025-09-23 06:54:10.892508 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-23 06:54:10.932194 | orchestrator | skipping: [testbed-manager] 2025-09-23 06:54:10.932219 | orchestrator | 2025-09-23 06:54:10.932224 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-23 06:54:10.972762 | orchestrator | skipping: [testbed-manager] 2025-09-23 06:54:10.972782 | orchestrator | 2025-09-23 06:54:10.972788 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-23 06:54:11.004991 | orchestrator | skipping: [testbed-manager] 2025-09-23 06:54:11.005054 | orchestrator | 2025-09-23 06:54:11.005066 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-23 06:54:11.038908 | orchestrator | skipping: [testbed-manager] 2025-09-23 06:54:11.039000 | orchestrator | 2025-09-23 06:54:11.039019 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-23 06:54:11.815015 | orchestrator | changed: [testbed-manager] 2025-09-23 06:54:11.815192 | 
orchestrator | 2025-09-23 06:54:11.815204 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-23 06:56:51.187156 | orchestrator | changed: [testbed-manager] 2025-09-23 06:56:51.187264 | orchestrator | 2025-09-23 06:56:51.187281 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-23 06:58:07.448856 | orchestrator | changed: [testbed-manager] 2025-09-23 06:58:07.448927 | orchestrator | 2025-09-23 06:58:07.448937 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-23 06:58:31.342591 | orchestrator | changed: [testbed-manager] 2025-09-23 06:58:31.342654 | orchestrator | 2025-09-23 06:58:31.342664 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-23 06:58:39.864079 | orchestrator | changed: [testbed-manager] 2025-09-23 06:58:39.864140 | orchestrator | 2025-09-23 06:58:39.864156 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-23 06:58:39.899962 | orchestrator | ok: [testbed-manager] 2025-09-23 06:58:39.899998 | orchestrator | 2025-09-23 06:58:39.900003 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-23 06:58:40.618554 | orchestrator | ok: [testbed-manager] 2025-09-23 06:58:40.618620 | orchestrator | 2025-09-23 06:58:40.618671 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-23 06:58:41.266566 | orchestrator | changed: [testbed-manager] 2025-09-23 06:58:41.266665 | orchestrator | 2025-09-23 06:58:41.266681 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-23 06:58:47.454144 | orchestrator | changed: [testbed-manager] 2025-09-23 06:58:47.454233 | orchestrator | 2025-09-23 06:58:47.454276 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-09-23 06:58:53.580201 | orchestrator | changed: [testbed-manager] 2025-09-23 06:58:53.580266 | orchestrator | 2025-09-23 06:58:53.580284 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-23 06:58:56.333595 | orchestrator | changed: [testbed-manager] 2025-09-23 06:58:56.333712 | orchestrator | 2025-09-23 06:58:56.333731 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-23 06:58:58.108093 | orchestrator | changed: [testbed-manager] 2025-09-23 06:58:58.108154 | orchestrator | 2025-09-23 06:58:58.108169 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-23 06:58:59.258095 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-23 06:58:59.258694 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-23 06:58:59.258720 | orchestrator | 2025-09-23 06:58:59.258733 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-23 06:58:59.302226 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-23 06:58:59.302288 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-23 06:58:59.302294 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-23 06:58:59.302300 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-23 06:59:13.519098 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-23 06:59:13.519164 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-23 06:59:13.519173 | orchestrator | 2025-09-23 06:59:13.519181 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-23 06:59:14.097718 | orchestrator | changed: [testbed-manager] 2025-09-23 06:59:14.097805 | orchestrator | 2025-09-23 06:59:14.097823 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-23 07:01:48.633536 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-23 07:01:48.633665 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-23 07:01:48.633687 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-23 07:01:48.633699 | orchestrator | 2025-09-23 07:01:48.633712 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-23 07:01:50.979302 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-23 07:01:50.979371 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-23 07:01:50.979385 | orchestrator | 2025-09-23 07:01:50.979397 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-23 07:01:50.979409 | orchestrator | 2025-09-23 07:01:50.979420 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 07:01:52.419441 | orchestrator | ok: [testbed-manager] 2025-09-23 07:01:52.419474 | orchestrator | 2025-09-23 07:01:52.419482 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-23 07:01:52.469585 | orchestrator | ok: [testbed-manager] 2025-09-23 07:01:52.469626 | 
orchestrator | 2025-09-23 07:01:52.469655 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-23 07:01:52.577173 | orchestrator | ok: [testbed-manager] 2025-09-23 07:01:52.577212 | orchestrator | 2025-09-23 07:01:52.577219 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-23 07:01:53.392049 | orchestrator | changed: [testbed-manager] 2025-09-23 07:01:53.392102 | orchestrator | 2025-09-23 07:01:53.392115 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-23 07:01:54.110811 | orchestrator | changed: [testbed-manager] 2025-09-23 07:01:54.110911 | orchestrator | 2025-09-23 07:01:54.110940 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-23 07:01:55.501139 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-23 07:01:55.501282 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-23 07:01:55.501296 | orchestrator | 2025-09-23 07:01:55.501324 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-23 07:01:56.918497 | orchestrator | changed: [testbed-manager] 2025-09-23 07:01:56.918608 | orchestrator | 2025-09-23 07:01:56.918625 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-23 07:01:58.713933 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-23 07:01:58.713977 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-23 07:01:58.713985 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-23 07:01:58.713992 | orchestrator | 2025-09-23 07:01:58.714000 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-23 07:01:58.769530 | orchestrator | skipping: 
[testbed-manager] 2025-09-23 07:01:58.769571 | orchestrator | 2025-09-23 07:01:58.769579 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-23 07:01:59.344451 | orchestrator | changed: [testbed-manager] 2025-09-23 07:01:59.344538 | orchestrator | 2025-09-23 07:01:59.344555 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-23 07:01:59.414840 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:01:59.414900 | orchestrator | 2025-09-23 07:01:59.414906 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-23 07:02:00.240919 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-23 07:02:00.240996 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:00.241013 | orchestrator | 2025-09-23 07:02:00.241023 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-23 07:02:00.276712 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:00.276785 | orchestrator | 2025-09-23 07:02:00.276799 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-23 07:02:00.309644 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:00.309704 | orchestrator | 2025-09-23 07:02:00.309713 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-23 07:02:00.335524 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:00.335581 | orchestrator | 2025-09-23 07:02:00.335589 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-23 07:02:00.394415 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:00.394497 | orchestrator | 2025-09-23 07:02:00.394514 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-23 07:02:01.145491 | orchestrator 
| ok: [testbed-manager] 2025-09-23 07:02:01.145552 | orchestrator | 2025-09-23 07:02:01.145563 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-23 07:02:01.145573 | orchestrator | 2025-09-23 07:02:01.145582 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 07:02:02.547609 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:02.547693 | orchestrator | 2025-09-23 07:02:02.547699 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-23 07:02:03.548489 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:03.548572 | orchestrator | 2025-09-23 07:02:03.548585 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:02:03.548596 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-23 07:02:03.548605 | orchestrator | 2025-09-23 07:02:03.768903 | orchestrator | ok: Runtime: 0:07:58.861176 2025-09-23 07:02:03.787811 | 2025-09-23 07:02:03.787962 | TASK [Point out that the log in on the manager is now possible] 2025-09-23 07:02:03.826614 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-23 07:02:03.836415 | 2025-09-23 07:02:03.836542 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-23 07:02:03.868933 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-09-23 07:02:03.877010 | 2025-09-23 07:02:03.877125 | TASK [Run manager part 1 + 2] 2025-09-23 07:02:06.089793 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-23 07:02:06.146856 | orchestrator | 2025-09-23 07:02:06.146906 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-23 07:02:06.146913 | orchestrator | 2025-09-23 07:02:06.146927 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 07:02:09.063879 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:09.063914 | orchestrator | 2025-09-23 07:02:09.063934 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-23 07:02:09.101406 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:09.101447 | orchestrator | 2025-09-23 07:02:09.101456 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-23 07:02:09.136400 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:09.136436 | orchestrator | 2025-09-23 07:02:09.136446 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-23 07:02:09.178899 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:09.178941 | orchestrator | 2025-09-23 07:02:09.178949 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-23 07:02:09.251757 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:09.251805 | orchestrator | 2025-09-23 07:02:09.251814 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-23 07:02:09.302805 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:09.302843 | orchestrator | 2025-09-23 07:02:09.302850 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-23 07:02:09.345485 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-23 07:02:09.345516 | orchestrator | 2025-09-23 07:02:09.345521 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-23 07:02:10.008415 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:10.008461 | orchestrator | 2025-09-23 07:02:10.008471 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-23 07:02:10.052713 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:10.052748 | orchestrator | 2025-09-23 07:02:10.052755 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-23 07:02:11.298785 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:11.298886 | orchestrator | 2025-09-23 07:02:11.298909 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-23 07:02:11.845525 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:11.845602 | orchestrator | 2025-09-23 07:02:11.845617 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-23 07:02:12.926417 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:12.926487 | orchestrator | 2025-09-23 07:02:12.926505 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-23 07:02:29.934254 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:29.934447 | orchestrator | 2025-09-23 07:02:29.934465 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-23 07:02:30.608384 | orchestrator | ok: [testbed-manager] 2025-09-23 07:02:30.608431 | orchestrator | 2025-09-23 07:02:30.608438 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-09-23 07:02:30.661376 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:30.661425 | orchestrator | 2025-09-23 07:02:30.661432 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-23 07:02:31.615007 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:31.615098 | orchestrator | 2025-09-23 07:02:31.615113 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-23 07:02:32.631244 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:32.631336 | orchestrator | 2025-09-23 07:02:32.631352 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-23 07:02:33.211659 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:33.211746 | orchestrator | 2025-09-23 07:02:33.211762 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-23 07:02:33.254576 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-23 07:02:33.254703 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-23 07:02:33.254720 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-23 07:02:33.254733 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-23 07:02:39.708988 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:39.709064 | orchestrator | 2025-09-23 07:02:39.709079 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-23 07:02:48.237282 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-23 07:02:48.237364 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-23 07:02:48.237381 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-23 07:02:48.237394 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-23 07:02:48.237413 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-23 07:02:48.237425 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-23 07:02:48.237436 | orchestrator | 2025-09-23 07:02:48.237448 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-23 07:02:49.299504 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:49.299568 | orchestrator | 2025-09-23 07:02:49.299584 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-23 07:02:49.342547 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:49.342620 | orchestrator | 2025-09-23 07:02:49.342668 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-23 07:02:52.550256 | orchestrator | changed: [testbed-manager] 2025-09-23 07:02:52.550342 | orchestrator | 2025-09-23 07:02:52.550357 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-23 07:02:52.590537 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:02:52.590578 | orchestrator | 2025-09-23 07:02:52.590586 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-23 07:04:31.932026 | orchestrator | changed: [testbed-manager] 2025-09-23 
07:04:31.932141 | orchestrator | 2025-09-23 07:04:31.932161 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-23 07:04:33.125781 | orchestrator | ok: [testbed-manager] 2025-09-23 07:04:33.125879 | orchestrator | 2025-09-23 07:04:33.125896 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:04:33.125910 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-23 07:04:33.125922 | orchestrator | 2025-09-23 07:04:33.510793 | orchestrator | ok: Runtime: 0:02:29.025810 2025-09-23 07:04:33.528961 | 2025-09-23 07:04:33.529110 | TASK [Reboot manager] 2025-09-23 07:04:35.065398 | orchestrator | ok: Runtime: 0:00:00.957923 2025-09-23 07:04:35.081859 | 2025-09-23 07:04:35.082022 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-23 07:04:51.515420 | orchestrator | ok 2025-09-23 07:04:51.526178 | 2025-09-23 07:04:51.526324 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-23 07:05:51.570579 | orchestrator | ok 2025-09-23 07:05:51.582040 | 2025-09-23 07:05:51.582188 | TASK [Deploy manager + bootstrap nodes] 2025-09-23 07:05:54.298315 | orchestrator | 2025-09-23 07:05:54.298498 | orchestrator | # DEPLOY MANAGER 2025-09-23 07:05:54.298522 | orchestrator | 2025-09-23 07:05:54.298536 | orchestrator | + set -e 2025-09-23 07:05:54.298549 | orchestrator | + echo 2025-09-23 07:05:54.298562 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-23 07:05:54.298616 | orchestrator | + echo 2025-09-23 07:05:54.298669 | orchestrator | + cat /opt/manager-vars.sh 2025-09-23 07:05:54.301905 | orchestrator | export NUMBER_OF_NODES=6 2025-09-23 07:05:54.301950 | orchestrator | 2025-09-23 07:05:54.301973 | orchestrator | export CEPH_VERSION=reef 2025-09-23 07:05:54.301994 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-23 07:05:54.302068 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-23 07:05:54.302108 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-23 07:05:54.302128 | orchestrator | 2025-09-23 07:05:54.302157 | orchestrator | export ARA=false 2025-09-23 07:05:54.302177 | orchestrator | export DEPLOY_MODE=manager 2025-09-23 07:05:54.302205 | orchestrator | export TEMPEST=false 2025-09-23 07:05:54.302226 | orchestrator | export IS_ZUUL=true 2025-09-23 07:05:54.302246 | orchestrator | 2025-09-23 07:05:54.302276 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-09-23 07:05:54.302298 | orchestrator | export EXTERNAL_API=false 2025-09-23 07:05:54.302319 | orchestrator | 2025-09-23 07:05:54.302339 | orchestrator | export IMAGE_USER=ubuntu 2025-09-23 07:05:54.302362 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-23 07:05:54.302382 | orchestrator | 2025-09-23 07:05:54.302403 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-23 07:05:54.302423 | orchestrator | 2025-09-23 07:05:54.302443 | orchestrator | + echo 2025-09-23 07:05:54.302465 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-23 07:05:54.303051 | orchestrator | ++ export INTERACTIVE=false 2025-09-23 07:05:54.303082 | orchestrator | ++ INTERACTIVE=false 2025-09-23 07:05:54.303143 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-23 07:05:54.303167 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-23 07:05:54.303186 | orchestrator | + source /opt/manager-vars.sh 2025-09-23 07:05:54.303205 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-23 07:05:54.303223 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-23 07:05:54.303242 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-23 07:05:54.303261 | orchestrator | ++ CEPH_VERSION=reef 2025-09-23 07:05:54.303287 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-23 07:05:54.303306 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-23 07:05:54.303324 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-23 07:05:54.303342 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-09-23 07:05:54.303361 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-23 07:05:54.303391 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-23 07:05:54.303409 | orchestrator | ++ export ARA=false 2025-09-23 07:05:54.303427 | orchestrator | ++ ARA=false 2025-09-23 07:05:54.303447 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-23 07:05:54.303464 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-23 07:05:54.303482 | orchestrator | ++ export TEMPEST=false 2025-09-23 07:05:54.303502 | orchestrator | ++ TEMPEST=false 2025-09-23 07:05:54.303520 | orchestrator | ++ export IS_ZUUL=true 2025-09-23 07:05:54.303538 | orchestrator | ++ IS_ZUUL=true 2025-09-23 07:05:54.303556 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-09-23 07:05:54.303574 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-09-23 07:05:54.303638 | orchestrator | ++ export EXTERNAL_API=false 2025-09-23 07:05:54.303659 | orchestrator | ++ EXTERNAL_API=false 2025-09-23 07:05:54.303678 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-23 07:05:54.303699 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-23 07:05:54.303718 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-23 07:05:54.303738 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-23 07:05:54.303759 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-23 07:05:54.303779 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-23 07:05:54.303799 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-23 07:05:54.385440 | orchestrator | + docker version 2025-09-23 07:05:54.689027 | orchestrator | Client: Docker Engine - Community 2025-09-23 07:05:54.689129 | orchestrator | Version: 27.5.1 2025-09-23 07:05:54.689146 | orchestrator | API version: 1.47 2025-09-23 07:05:54.689158 | orchestrator | Go version: go1.22.11 2025-09-23 07:05:54.689169 | orchestrator | Git commit: 9f9e405 2025-09-23 
07:05:54.689180 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-23 07:05:54.689193 | orchestrator | OS/Arch: linux/amd64 2025-09-23 07:05:54.689204 | orchestrator | Context: default 2025-09-23 07:05:54.689215 | orchestrator | 2025-09-23 07:05:54.689226 | orchestrator | Server: Docker Engine - Community 2025-09-23 07:05:54.689237 | orchestrator | Engine: 2025-09-23 07:05:54.689249 | orchestrator | Version: 27.5.1 2025-09-23 07:05:54.689260 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-23 07:05:54.689305 | orchestrator | Go version: go1.22.11 2025-09-23 07:05:54.689316 | orchestrator | Git commit: 4c9b3b0 2025-09-23 07:05:54.689326 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-23 07:05:54.689337 | orchestrator | OS/Arch: linux/amd64 2025-09-23 07:05:54.689348 | orchestrator | Experimental: false 2025-09-23 07:05:54.689359 | orchestrator | containerd: 2025-09-23 07:05:54.689370 | orchestrator | Version: 1.7.27 2025-09-23 07:05:54.689380 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-23 07:05:54.689392 | orchestrator | runc: 2025-09-23 07:05:54.689402 | orchestrator | Version: 1.2.5 2025-09-23 07:05:54.689413 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-23 07:05:54.689424 | orchestrator | docker-init: 2025-09-23 07:05:54.689434 | orchestrator | Version: 0.19.0 2025-09-23 07:05:54.689446 | orchestrator | GitCommit: de40ad0 2025-09-23 07:05:54.692173 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-23 07:05:54.698381 | orchestrator | + set -e 2025-09-23 07:05:54.698996 | orchestrator | + source /opt/manager-vars.sh 2025-09-23 07:05:54.699015 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-23 07:05:54.699027 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-23 07:05:54.699038 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-23 07:05:54.699049 | orchestrator | ++ CEPH_VERSION=reef 2025-09-23 07:05:54.699060 | orchestrator | ++ export 
CONFIGURATION_VERSION=main 2025-09-23 07:05:54.699071 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-23 07:05:54.699095 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-23 07:05:54.699107 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-23 07:05:54.699118 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-23 07:05:54.699128 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-23 07:05:54.699139 | orchestrator | ++ export ARA=false 2025-09-23 07:05:54.699150 | orchestrator | ++ ARA=false 2025-09-23 07:05:54.699161 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-23 07:05:54.699172 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-23 07:05:54.699182 | orchestrator | ++ export TEMPEST=false 2025-09-23 07:05:54.699193 | orchestrator | ++ TEMPEST=false 2025-09-23 07:05:54.699204 | orchestrator | ++ export IS_ZUUL=true 2025-09-23 07:05:54.699214 | orchestrator | ++ IS_ZUUL=true 2025-09-23 07:05:54.699225 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-09-23 07:05:54.699236 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-09-23 07:05:54.699247 | orchestrator | ++ export EXTERNAL_API=false 2025-09-23 07:05:54.699258 | orchestrator | ++ EXTERNAL_API=false 2025-09-23 07:05:54.699269 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-23 07:05:54.699279 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-23 07:05:54.699290 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-23 07:05:54.699301 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-23 07:05:54.699312 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-23 07:05:54.699322 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-23 07:05:54.699333 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-23 07:05:54.699344 | orchestrator | ++ export INTERACTIVE=false 2025-09-23 07:05:54.699355 | orchestrator | ++ INTERACTIVE=false 2025-09-23 07:05:54.699366 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-23 
07:05:54.699381 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-23 07:05:54.699392 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-23 07:05:54.699403 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-23 07:05:54.699414 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-23 07:05:54.706765 | orchestrator | + set -e 2025-09-23 07:05:54.706817 | orchestrator | + VERSION=reef 2025-09-23 07:05:54.707728 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-23 07:05:54.714088 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-23 07:05:54.714124 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-23 07:05:54.719848 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-23 07:05:54.726389 | orchestrator | + set -e 2025-09-23 07:05:54.726456 | orchestrator | + VERSION=2024.2 2025-09-23 07:05:54.727080 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-23 07:05:54.731146 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-23 07:05:54.731184 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-23 07:05:54.736047 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-23 07:05:54.736224 | orchestrator | ++ semver latest 7.0.0 2025-09-23 07:05:54.802150 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-23 07:05:54.802261 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-23 07:05:54.802279 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-23 07:05:54.802294 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-23 07:05:54.894452 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-23 07:05:54.895829 | orchestrator | + source /opt/venv/bin/activate 2025-09-23 
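The `set-ceph-version.sh` and `set-openstack-version.sh` steps traced above share one pattern: confirm the key already exists with `grep`, then rewrite it in place with `sed -i`. A minimal sketch of that pattern against a temporary file (standing in for `/opt/configuration/environments/manager/configuration.yml`):

```shell
#!/usr/bin/env sh
# Sketch of the set-*-version.sh pattern from the trace (paths simplified):
# only rewrite the key when it is already present in the configuration file.
set -e

CONFIG=$(mktemp)                        # stand-in for configuration.yml
printf 'ceph_version: quincy\n' > "$CONFIG"

VERSION=reef
# Mirror of the `[[ -n $(grep ...) ]]` guard: skip the edit if the key is absent.
if [ -n "$(grep '^ceph_version:' "$CONFIG")" ]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONFIG"
fi

cat "$CONFIG"
```

The guard keeps the edit idempotent: re-running the script with the same version leaves the file byte-identical.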
07:05:54.896817 | orchestrator | ++ deactivate nondestructive 2025-09-23 07:05:54.896835 | orchestrator | ++ '[' -n '' ']' 2025-09-23 07:05:54.896848 | orchestrator | ++ '[' -n '' ']' 2025-09-23 07:05:54.896859 | orchestrator | ++ hash -r 2025-09-23 07:05:54.896875 | orchestrator | ++ '[' -n '' ']' 2025-09-23 07:05:54.896886 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-23 07:05:54.896897 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-23 07:05:54.896908 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-23 07:05:54.896919 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-23 07:05:54.896932 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-23 07:05:54.896943 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-23 07:05:54.896954 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-23 07:05:54.896966 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-23 07:05:54.896977 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-23 07:05:54.896988 | orchestrator | ++ export PATH 2025-09-23 07:05:54.896999 | orchestrator | ++ '[' -n '' ']' 2025-09-23 07:05:54.897039 | orchestrator | ++ '[' -z '' ']' 2025-09-23 07:05:54.897051 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-23 07:05:54.897062 | orchestrator | ++ PS1='(venv) ' 2025-09-23 07:05:54.897138 | orchestrator | ++ export PS1 2025-09-23 07:05:54.897151 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-23 07:05:54.897176 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-23 07:05:54.897187 | orchestrator | ++ hash -r 2025-09-23 07:05:54.897230 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-23 07:05:56.198792 | orchestrator | 2025-09-23 07:05:56.198892 | orchestrator | 
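Earlier in the trace, `semver latest 7.0.0` prints `-1`, which feeds the `[[ -1 -ge 0 ]]` test. The helper (symlinked from `contrib/semver2.sh`) prints -1/0/1 for less-than/equal/greater-than. A rough stand-in using GNU `sort -V` — an approximation, and unlike the surrounding script it does not special-case `latest`:

```shell
# Hedged stand-in for the `semver A B` helper seen in the trace: print -1, 0
# or 1 depending on how version A compares to version B. Uses sort -V (GNU
# coreutils) instead of a full SemVer parser, so this is an approximation.
semver_cmp() {
    a=$1; b=$2
    if [ "$a" = "$b" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
        echo -1     # a sorts first, so a < b
    else
        echo 1
    fi
}

semver_cmp 6.9.1 7.0.0
semver_cmp 7.1.0 7.0.0
```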
PLAY [Copy custom facts] ******************************************************* 2025-09-23 07:05:56.198909 | orchestrator | 2025-09-23 07:05:56.198921 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-23 07:05:56.814317 | orchestrator | ok: [testbed-manager] 2025-09-23 07:05:56.814426 | orchestrator | 2025-09-23 07:05:56.814442 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-23 07:05:57.797265 | orchestrator | changed: [testbed-manager] 2025-09-23 07:05:57.797377 | orchestrator | 2025-09-23 07:05:57.797393 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-23 07:05:57.797406 | orchestrator | 2025-09-23 07:05:57.797418 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 07:06:01.195429 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:01.195550 | orchestrator | 2025-09-23 07:06:01.195569 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-23 07:06:01.246489 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:01.246571 | orchestrator | 2025-09-23 07:06:01.246622 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-23 07:06:01.699743 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:01.699851 | orchestrator | 2025-09-23 07:06:01.699867 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-23 07:06:01.743435 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:01.743544 | orchestrator | 2025-09-23 07:06:01.743562 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-23 07:06:02.077921 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:02.078941 | orchestrator | 2025-09-23 07:06:02.078977 | 
orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-23 07:06:02.134681 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:02.134776 | orchestrator | 2025-09-23 07:06:02.134792 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-23 07:06:02.480343 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:02.480442 | orchestrator | 2025-09-23 07:06:02.480458 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-23 07:06:02.595851 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:02.595949 | orchestrator | 2025-09-23 07:06:02.595973 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-23 07:06:02.595994 | orchestrator | 2025-09-23 07:06:02.596017 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 07:06:04.286151 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:04.286249 | orchestrator | 2025-09-23 07:06:04.286266 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-23 07:06:04.388230 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-23 07:06:04.388339 | orchestrator | 2025-09-23 07:06:04.388354 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-23 07:06:04.444840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-23 07:06:04.444939 | orchestrator | 2025-09-23 07:06:04.444952 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-23 07:06:05.561247 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-23 07:06:05.561360 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik/certificates) 2025-09-23 07:06:05.561387 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-23 07:06:05.561409 | orchestrator | 2025-09-23 07:06:05.561426 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-23 07:06:07.289466 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-23 07:06:07.289567 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-23 07:06:07.289620 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-23 07:06:07.289644 | orchestrator | 2025-09-23 07:06:07.290324 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-23 07:06:07.869246 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-23 07:06:07.869312 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:07.869323 | orchestrator | 2025-09-23 07:06:07.869333 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-23 07:06:08.443996 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-23 07:06:08.444093 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:08.444117 | orchestrator | 2025-09-23 07:06:08.444133 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-09-23 07:06:08.495110 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:08.495180 | orchestrator | 2025-09-23 07:06:08.495191 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-23 07:06:08.819447 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:08.819528 | orchestrator | 2025-09-23 07:06:08.819542 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-23 07:06:08.897878 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-23 07:06:08.897946 | orchestrator | 2025-09-23 07:06:08.897958 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-23 07:06:09.858350 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:09.858435 | orchestrator | 2025-09-23 07:06:09.858449 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-23 07:06:10.613779 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:10.613866 | orchestrator | 2025-09-23 07:06:10.613882 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-23 07:06:21.238264 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:21.238350 | orchestrator | 2025-09-23 07:06:21.238360 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-23 07:06:21.302511 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:21.302691 | orchestrator | 2025-09-23 07:06:21.302726 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-23 07:06:21.302751 | orchestrator | 2025-09-23 07:06:21.302774 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 07:06:23.158703 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:23.158803 | orchestrator | 2025-09-23 07:06:23.158850 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-23 07:06:23.294213 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-23 07:06:23.294308 | orchestrator | 2025-09-23 07:06:23.294323 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-23 07:06:23.354459 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-23 07:06:23.354604 | orchestrator | 2025-09-23 07:06:23.354623 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-23 07:06:25.965345 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:25.965454 | orchestrator | 2025-09-23 07:06:25.965472 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-23 07:06:26.021818 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:26.021931 | orchestrator | 2025-09-23 07:06:26.021950 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-23 07:06:26.162995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-23 07:06:26.163084 | orchestrator | 2025-09-23 07:06:26.163096 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-23 07:06:29.076668 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-23 07:06:29.076758 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-23 07:06:29.076770 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-23 07:06:29.076781 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-23 07:06:29.076792 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-23 07:06:29.076802 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-23 07:06:29.076812 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-23 07:06:29.076822 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-23 07:06:29.076831 | orchestrator | 2025-09-23 07:06:29.076842 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-09-23 07:06:29.725571 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:29.725696 | orchestrator | 2025-09-23 07:06:29.725711 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-23 07:06:30.397917 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:30.398076 | orchestrator | 2025-09-23 07:06:30.398091 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-23 07:06:30.472499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-23 07:06:30.472676 | orchestrator | 2025-09-23 07:06:30.472706 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-23 07:06:31.676882 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-23 07:06:31.677001 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-23 07:06:31.677019 | orchestrator | 2025-09-23 07:06:31.677033 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-23 07:06:32.345187 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:32.345286 | orchestrator | 2025-09-23 07:06:32.345309 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-23 07:06:32.404350 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:32.404431 | orchestrator | 2025-09-23 07:06:32.404442 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-23 07:06:32.483398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-23 07:06:32.483491 | orchestrator | 2025-09-23 07:06:32.483507 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2025-09-23 07:06:33.108795 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:33.108889 | orchestrator | 2025-09-23 07:06:33.108904 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-23 07:06:33.183084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-23 07:06:33.183191 | orchestrator | 2025-09-23 07:06:33.183203 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-23 07:06:34.596170 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-23 07:06:34.596251 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-23 07:06:34.596262 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:34.596272 | orchestrator | 2025-09-23 07:06:34.596280 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-23 07:06:35.220035 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:35.220119 | orchestrator | 2025-09-23 07:06:35.220130 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-23 07:06:35.282121 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:35.282222 | orchestrator | 2025-09-23 07:06:35.282238 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-23 07:06:35.386503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-23 07:06:35.386612 | orchestrator | 2025-09-23 07:06:35.386623 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-23 07:06:35.930692 | orchestrator | changed: [testbed-manager] 2025-09-23 
07:06:35.930786 | orchestrator | 2025-09-23 07:06:35.930799 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-23 07:06:36.365318 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:36.365405 | orchestrator | 2025-09-23 07:06:36.365417 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-23 07:06:37.610552 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-23 07:06:37.610694 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-23 07:06:37.610710 | orchestrator | 2025-09-23 07:06:37.610723 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-23 07:06:38.267037 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:38.267145 | orchestrator | 2025-09-23 07:06:38.267166 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-23 07:06:38.668051 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:38.668143 | orchestrator | 2025-09-23 07:06:38.668158 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-23 07:06:39.028831 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:39.028927 | orchestrator | 2025-09-23 07:06:39.028942 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-23 07:06:39.074228 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:39.074322 | orchestrator | 2025-09-23 07:06:39.074336 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-23 07:06:39.148441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-23 07:06:39.148539 | orchestrator | 2025-09-23 07:06:39.148554 | orchestrator | TASK 
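The `Set fs.inotify.max_user_watches` / `max_user_instances` tasks above tune kernel inotify limits (the conductor and listener services watch many files). A small sketch of inspecting the current limits and, as root, persisting higher ones; the 524288 value is an assumption, as the role's actual numbers are not visible in this log:

```shell
# The two sysctl tasks raise Linux inotify limits. The current limits are
# readable without root via /proc/sys:
watches=$(cat /proc/sys/fs/inotify/max_user_watches)
instances=$(cat /proc/sys/fs/inotify/max_user_instances)
echo "fs.inotify.max_user_watches=$watches"
echo "fs.inotify.max_user_instances=$instances"
# Persisting higher limits needs root; the value below is an assumption:
#   echo 'fs.inotify.max_user_watches = 524288' > /etc/sysctl.d/99-inotify.conf
#   sysctl --system
```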
[osism.services.manager : Include wrapper vars file] ********************** 2025-09-23 07:06:39.202171 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:39.202263 | orchestrator | 2025-09-23 07:06:39.202277 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-23 07:06:41.320489 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-23 07:06:41.320633 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-23 07:06:41.320650 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-23 07:06:41.320662 | orchestrator | 2025-09-23 07:06:41.320675 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-23 07:06:42.068490 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:42.068625 | orchestrator | 2025-09-23 07:06:42.068646 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-23 07:06:42.767351 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:42.767461 | orchestrator | 2025-09-23 07:06:42.767478 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-23 07:06:43.490277 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:43.490368 | orchestrator | 2025-09-23 07:06:43.490382 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-23 07:06:43.566078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-23 07:06:43.566177 | orchestrator | 2025-09-23 07:06:43.566192 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-23 07:06:43.622922 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:43.623012 | orchestrator | 2025-09-23 07:06:43.623027 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2025-09-23 07:06:44.354242 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-23 07:06:44.354342 | orchestrator | 2025-09-23 07:06:44.354357 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-23 07:06:44.440030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-23 07:06:44.440126 | orchestrator | 2025-09-23 07:06:44.440140 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-23 07:06:45.150287 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:45.150397 | orchestrator | 2025-09-23 07:06:45.150421 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-23 07:06:45.740873 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:45.740975 | orchestrator | 2025-09-23 07:06:45.740990 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-23 07:06:45.799249 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:06:45.799365 | orchestrator | 2025-09-23 07:06:45.799410 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-23 07:06:45.854631 | orchestrator | ok: [testbed-manager] 2025-09-23 07:06:45.854733 | orchestrator | 2025-09-23 07:06:45.854749 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-23 07:06:46.726681 | orchestrator | changed: [testbed-manager] 2025-09-23 07:06:46.726782 | orchestrator | 2025-09-23 07:06:46.726798 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-23 07:07:54.603483 | orchestrator | changed: [testbed-manager] 2025-09-23 07:07:54.603685 | orchestrator | 2025-09-23 
07:07:54.603712 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-23 07:07:55.660999 | orchestrator | ok: [testbed-manager] 2025-09-23 07:07:55.661089 | orchestrator | 2025-09-23 07:07:55.661103 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-23 07:07:55.800086 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:07:55.800168 | orchestrator | 2025-09-23 07:07:55.800180 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-23 07:07:58.227447 | orchestrator | changed: [testbed-manager] 2025-09-23 07:07:58.227636 | orchestrator | 2025-09-23 07:07:58.227670 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-23 07:07:58.287721 | orchestrator | ok: [testbed-manager] 2025-09-23 07:07:58.287818 | orchestrator | 2025-09-23 07:07:58.287833 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-23 07:07:58.287845 | orchestrator | 2025-09-23 07:07:58.287855 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-23 07:07:58.335255 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:07:58.335355 | orchestrator | 2025-09-23 07:07:58.335370 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-23 07:08:58.386507 | orchestrator | Pausing for 60 seconds 2025-09-23 07:08:58.386677 | orchestrator | changed: [testbed-manager] 2025-09-23 07:08:58.386696 | orchestrator | 2025-09-23 07:08:58.386710 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-23 07:09:02.624696 | orchestrator | changed: [testbed-manager] 2025-09-23 07:09:02.624829 | orchestrator | 2025-09-23 07:09:02.624856 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2025-09-23 07:09:44.248846 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-23 07:09:44.248963 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-09-23 07:09:44.248979 | orchestrator | changed: [testbed-manager] 2025-09-23 07:09:44.249021 | orchestrator | 2025-09-23 07:09:44.249034 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-23 07:09:54.309459 | orchestrator | changed: [testbed-manager] 2025-09-23 07:09:54.309757 | orchestrator | 2025-09-23 07:09:54.309796 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-23 07:09:54.401715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-23 07:09:54.401826 | orchestrator | 2025-09-23 07:09:54.401842 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-23 07:09:54.401854 | orchestrator | 2025-09-23 07:09:54.401866 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-23 07:09:54.449544 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:09:54.449701 | orchestrator | 2025-09-23 07:09:54.449716 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2025-09-23 07:09:54.515020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2025-09-23 07:09:54.515132 | orchestrator | 2025-09-23 07:09:54.515154 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2025-09-23 07:09:55.290998 | orchestrator | changed: [testbed-manager] 2025-09-23 07:09:55.291097 | 
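The `Wait for an healthy manager service` handler above retries up to 50 times, logging `FAILED - RETRYING ... (N retries left)` on each failed probe. A self-contained sketch of that retry loop (`check_health` is hypothetical; the real handler inspects the manager container's healthcheck status):

```shell
# Hedged sketch of the retry pattern behind "Wait for an healthy manager
# service": probe until healthy or the retry budget runs out.
attempt=0
check_health() {
    attempt=$((attempt + 1))
    [ "$attempt" -ge 3 ]    # simulate: healthy on the third probe
}

i=50                        # the log shows a 50-retry budget
until check_health; do
    i=$((i - 1))
    [ "$i" -gt 0 ] || { echo 'service never became healthy' >&2; exit 1; }
    echo "FAILED - RETRYING: wait for healthy manager service ($i retries left)"
    sleep 0                 # the real task waits between probes
done
echo "healthy after $attempt probes"
```

With a service that turns healthy on the third probe, this prints two RETRYING lines, matching the two seen in the log above.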
orchestrator |
2025-09-23 07:09:55.291113 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2025-09-23 07:09:59.144218 | orchestrator | ok: [testbed-manager]
2025-09-23 07:09:59.144305 | orchestrator |
2025-09-23 07:09:59.144317 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2025-09-23 07:09:59.213989 | orchestrator | ok: [testbed-manager] => {
2025-09-23 07:09:59.214086 | orchestrator | "version_check_result.stdout_lines": [
2025-09-23 07:09:59.214100 | orchestrator | "=== OSISM Container Version Check ===",
2025-09-23 07:09:59.214110 | orchestrator | "Checking running containers against expected versions...",
2025-09-23 07:09:59.214119 | orchestrator | "",
2025-09-23 07:09:59.214128 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2025-09-23 07:09:59.214137 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2025-09-23 07:09:59.214146 | orchestrator | " Enabled: true",
2025-09-23 07:09:59.214155 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2025-09-23 07:09:59.214163 | orchestrator | " Status: ✅ MATCH",
2025-09-23 07:09:59.214172 | orchestrator | "",
2025-09-23 07:09:59.214181 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2025-09-23 07:09:59.214191 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2025-09-23 07:09:59.214199 | orchestrator | " Enabled: true",
2025-09-23 07:09:59.214208 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2025-09-23 07:09:59.214217 | orchestrator | " Status: ✅ MATCH",
2025-09-23 07:09:59.214225 | orchestrator | "",
2025-09-23 07:09:59.214234 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2025-09-23 07:09:59.214243 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2025-09-23 07:09:59.214251 | orchestrator | " Enabled: true",
2025-09-23 07:09:59.214260 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2025-09-23 07:09:59.214268 | orchestrator | " Status: ✅ MATCH",
2025-09-23 07:09:59.214277 | orchestrator | "",
2025-09-23 07:09:59.214286 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2025-09-23 07:09:59.214295 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2025-09-23 07:09:59.214303 | orchestrator | " Enabled: true",
2025-09-23 07:09:59.214312 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2025-09-23 07:09:59.214321 | orchestrator | " Status: ✅ MATCH",
2025-09-23 07:09:59.214330 | orchestrator | "",
2025-09-23 07:09:59.214338 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2025-09-23 07:09:59.214347 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-09-23 07:09:59.214379 | orchestrator | " Enabled: true",
2025-09-23 07:09:59.214388 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-09-23 07:09:59.214397 | orchestrator | " Status: ✅ MATCH",
2025-09-23 07:09:59.214405 | orchestrator | "",
2025-09-23 07:09:59.214414 | orchestrator | "Checking service: osismclient (OSISM Client)",
2025-09-23 07:09:59.214423 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-09-23 07:09:59.214431 | orchestrator | " Enabled: true",
2025-09-23 07:09:59.214440 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-09-23 07:09:59.214448 | orchestrator | " Status: ✅ MATCH",
2025-09-23 07:09:59.214457 | orchestrator | "",
2025-09-23 07:09:59.214466 | orchestrator | "Checking service: ara-server (ARA Server)",
2025-09-23 07:09:59.214474 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2025-09-23 07:09:59.214483 | orchestrator | " Enabled: true",
2025-09-23 07:09:59.214491 | orchestrator | " Running:
registry.osism.tech/osism/ara-server:1.7.3", 2025-09-23 07:09:59.214500 | orchestrator | " Status: ✅ MATCH", 2025-09-23 07:09:59.214508 | orchestrator | "", 2025-09-23 07:09:59.214517 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2025-09-23 07:09:59.214530 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-09-23 07:09:59.214538 | orchestrator | " Enabled: true", 2025-09-23 07:09:59.214595 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-09-23 07:09:59.214606 | orchestrator | " Status: ✅ MATCH", 2025-09-23 07:09:59.214616 | orchestrator | "", 2025-09-23 07:09:59.214627 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2025-09-23 07:09:59.214638 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2025-09-23 07:09:59.214648 | orchestrator | " Enabled: true", 2025-09-23 07:09:59.214664 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2025-09-23 07:09:59.214674 | orchestrator | " Status: ✅ MATCH", 2025-09-23 07:09:59.214685 | orchestrator | "", 2025-09-23 07:09:59.214695 | orchestrator | "Checking service: redis (Redis Cache)", 2025-09-23 07:09:59.214705 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-09-23 07:09:59.214715 | orchestrator | " Enabled: true", 2025-09-23 07:09:59.214725 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-09-23 07:09:59.214735 | orchestrator | " Status: ✅ MATCH", 2025-09-23 07:09:59.214745 | orchestrator | "", 2025-09-23 07:09:59.214755 | orchestrator | "Checking service: api (OSISM API Service)", 2025-09-23 07:09:59.214765 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.214775 | orchestrator | " Enabled: true", 2025-09-23 07:09:59.214786 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.214796 | orchestrator | " 
Status: ✅ MATCH", 2025-09-23 07:09:59.214806 | orchestrator | "", 2025-09-23 07:09:59.214816 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2025-09-23 07:09:59.214826 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.214836 | orchestrator | " Enabled: true", 2025-09-23 07:09:59.214846 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.214856 | orchestrator | " Status: ✅ MATCH", 2025-09-23 07:09:59.214866 | orchestrator | "", 2025-09-23 07:09:59.214875 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2025-09-23 07:09:59.214885 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.214895 | orchestrator | " Enabled: true", 2025-09-23 07:09:59.214905 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.214916 | orchestrator | " Status: ✅ MATCH", 2025-09-23 07:09:59.214925 | orchestrator | "", 2025-09-23 07:09:59.214935 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2025-09-23 07:09:59.214944 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.214953 | orchestrator | " Enabled: true", 2025-09-23 07:09:59.214962 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.214977 | orchestrator | " Status: ✅ MATCH", 2025-09-23 07:09:59.214986 | orchestrator | "", 2025-09-23 07:09:59.214995 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2025-09-23 07:09:59.215016 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.215025 | orchestrator | " Enabled: true", 2025-09-23 07:09:59.215034 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-23 07:09:59.215042 | orchestrator | " Status: ✅ MATCH", 2025-09-23 07:09:59.215051 | orchestrator | "", 2025-09-23 07:09:59.215060 | orchestrator | "=== Summary ===", 2025-09-23 
07:09:59.215068 | orchestrator | "Errors (version mismatches): 0", 2025-09-23 07:09:59.215077 | orchestrator | "Warnings (expected containers not running): 0", 2025-09-23 07:09:59.215085 | orchestrator | "", 2025-09-23 07:09:59.215094 | orchestrator | "✅ All running containers match expected versions!" 2025-09-23 07:09:59.215103 | orchestrator | ] 2025-09-23 07:09:59.215112 | orchestrator | } 2025-09-23 07:09:59.215121 | orchestrator | 2025-09-23 07:09:59.215130 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-09-23 07:09:59.270730 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:09:59.270778 | orchestrator | 2025-09-23 07:09:59.270788 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:09:59.270800 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-23 07:09:59.270809 | orchestrator | 2025-09-23 07:09:59.381049 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-23 07:09:59.381162 | orchestrator | + deactivate 2025-09-23 07:09:59.381179 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-23 07:09:59.381942 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-23 07:09:59.381964 | orchestrator | + export PATH 2025-09-23 07:09:59.381977 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-23 07:09:59.381989 | orchestrator | + '[' -n '' ']' 2025-09-23 07:09:59.381999 | orchestrator | + hash -r 2025-09-23 07:09:59.382010 | orchestrator | + '[' -n '' ']' 2025-09-23 07:09:59.382072 | orchestrator | + unset VIRTUAL_ENV 2025-09-23 07:09:59.382084 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-23 07:09:59.382095 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-23 07:09:59.382106 | orchestrator | + unset -f deactivate 2025-09-23 07:09:59.382118 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-23 07:09:59.389274 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-23 07:09:59.389329 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-23 07:09:59.389345 | orchestrator | + local max_attempts=60 2025-09-23 07:09:59.389359 | orchestrator | + local name=ceph-ansible 2025-09-23 07:09:59.389370 | orchestrator | + local attempt_num=1 2025-09-23 07:09:59.390313 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:09:59.422457 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:09:59.422541 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-23 07:09:59.422606 | orchestrator | + local max_attempts=60 2025-09-23 07:09:59.422627 | orchestrator | + local name=kolla-ansible 2025-09-23 07:09:59.422642 | orchestrator | + local attempt_num=1 2025-09-23 07:09:59.422859 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-23 07:09:59.451181 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:09:59.451258 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-23 07:09:59.451271 | orchestrator | + local max_attempts=60 2025-09-23 07:09:59.451283 | orchestrator | + local name=osism-ansible 2025-09-23 07:09:59.451294 | orchestrator | + local attempt_num=1 2025-09-23 07:09:59.451774 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-23 07:09:59.476294 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:09:59.476378 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-23 07:09:59.476392 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-23 07:10:00.210995 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2025-09-23 07:10:00.400876 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-23 07:10:00.400980 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-09-23 07:10:00.400990 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-09-23 07:10:00.400997 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-23 07:10:00.401006 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2025-09-23 07:10:00.401013 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2025-09-23 07:10:00.401020 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2025-09-23 07:10:00.401046 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 58 seconds (healthy) 2025-09-23 07:10:00.401056 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2025-09-23 07:10:00.401066 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-09-23 07:10:00.401076 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy) 2025-09-23 
07:10:00.401086 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2025-09-23 07:10:00.401096 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-09-23 07:10:00.401105 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-23 07:10:00.401115 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-09-23 07:10:00.401125 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2025-09-23 07:10:00.406720 | orchestrator | ++ semver latest 7.0.0 2025-09-23 07:10:00.461992 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-23 07:10:00.462169 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-23 07:10:00.462185 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-23 07:10:00.466684 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-23 07:10:12.691056 | orchestrator | 2025-09-23 07:10:12 | INFO  | Task b69cefc7-37c2-4f34-bc37-227a7b2d7b3f (resolvconf) was prepared for execution. 2025-09-23 07:10:12.691153 | orchestrator | 2025-09-23 07:10:12 | INFO  | It takes a moment until task b69cefc7-37c2-4f34-bc37-227a7b2d7b3f (resolvconf) has been started and output is visible here. 
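The xtrace output above shows a `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` until each manager container reports `healthy`. A minimal sketch of that retry loop, reconstructed from the trace — the probe is passed in as a third parameter (a hypothetical injection point, not in the testbed script) so the loop can be exercised without a Docker daemon, and the 5-second poll interval is an assumption:

```shell
# Sketch of the health-wait loop visible in the xtrace above.
# The job itself probes with:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local probe=${3:-docker_health_probe}   # hypothetical injection point for testing
    local attempt_num=1
    while [ "$("$probe" "$name")" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed poll interval
    done
}

# Default probe, as used in the job (requires a running Docker daemon):
docker_health_probe() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}
```

In the job this is invoked once per service container, e.g. `wait_for_container_healthy 60 ceph-ansible`, before the deployment proceeds.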
2025-09-23 07:10:25.801111 | orchestrator |
2025-09-23 07:10:25.801225 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-23 07:10:25.801242 | orchestrator |
2025-09-23 07:10:25.801255 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-23 07:10:25.801267 | orchestrator | Tuesday 23 September 2025 07:10:16 +0000 (0:00:00.134) 0:00:00.134 *****
2025-09-23 07:10:25.801278 | orchestrator | ok: [testbed-manager]
2025-09-23 07:10:25.801289 | orchestrator |
2025-09-23 07:10:25.801301 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-23 07:10:25.801313 | orchestrator | Tuesday 23 September 2025 07:10:19 +0000 (0:00:03.728) 0:00:03.863 *****
2025-09-23 07:10:25.801324 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:10:25.801336 | orchestrator |
2025-09-23 07:10:25.801347 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-23 07:10:25.801357 | orchestrator | Tuesday 23 September 2025 07:10:20 +0000 (0:00:00.069) 0:00:03.929 *****
2025-09-23 07:10:25.801369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-23 07:10:25.801380 | orchestrator |
2025-09-23 07:10:25.801391 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-23 07:10:25.801402 | orchestrator | Tuesday 23 September 2025 07:10:20 +0000 (0:00:00.069) 0:00:03.998 *****
2025-09-23 07:10:25.801423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-23 07:10:25.801435 | orchestrator |
2025-09-23 07:10:25.801446 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-23 07:10:25.801457 | orchestrator | Tuesday 23 September 2025 07:10:20 +0000 (0:00:00.065) 0:00:04.064 *****
2025-09-23 07:10:25.801467 | orchestrator | ok: [testbed-manager]
2025-09-23 07:10:25.801478 | orchestrator |
2025-09-23 07:10:25.801489 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-23 07:10:25.801500 | orchestrator | Tuesday 23 September 2025 07:10:21 +0000 (0:00:01.051) 0:00:05.115 *****
2025-09-23 07:10:25.801511 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:10:25.801522 | orchestrator |
2025-09-23 07:10:25.801533 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-23 07:10:25.801581 | orchestrator | Tuesday 23 September 2025 07:10:21 +0000 (0:00:00.065) 0:00:05.181 *****
2025-09-23 07:10:25.801593 | orchestrator | ok: [testbed-manager]
2025-09-23 07:10:25.801604 | orchestrator |
2025-09-23 07:10:25.801615 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-23 07:10:25.801626 | orchestrator | Tuesday 23 September 2025 07:10:21 +0000 (0:00:00.483) 0:00:05.665 *****
2025-09-23 07:10:25.801638 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:10:25.801650 | orchestrator |
2025-09-23 07:10:25.801663 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-23 07:10:25.801677 | orchestrator | Tuesday 23 September 2025 07:10:21 +0000 (0:00:00.083) 0:00:05.748 *****
2025-09-23 07:10:25.801689 | orchestrator | changed: [testbed-manager]
2025-09-23 07:10:25.801702 | orchestrator |
2025-09-23 07:10:25.801714 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-23 07:10:25.801726 | orchestrator | Tuesday 23 September 2025 07:10:22 +0000 (0:00:00.510) 0:00:06.259 *****
2025-09-23 07:10:25.801738 | orchestrator | changed: [testbed-manager]
2025-09-23 07:10:25.801751 | orchestrator |
2025-09-23 07:10:25.801764 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-23 07:10:25.801776 | orchestrator | Tuesday 23 September 2025 07:10:23 +0000 (0:00:01.061) 0:00:07.320 *****
2025-09-23 07:10:25.801788 | orchestrator | ok: [testbed-manager]
2025-09-23 07:10:25.801800 | orchestrator |
2025-09-23 07:10:25.801813 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-23 07:10:25.801825 | orchestrator | Tuesday 23 September 2025 07:10:24 +0000 (0:00:00.917) 0:00:08.238 *****
2025-09-23 07:10:25.801859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-23 07:10:25.801873 | orchestrator |
2025-09-23 07:10:25.801885 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-23 07:10:25.801897 | orchestrator | Tuesday 23 September 2025 07:10:24 +0000 (0:00:00.089) 0:00:08.327 *****
2025-09-23 07:10:25.801909 | orchestrator | changed: [testbed-manager]
2025-09-23 07:10:25.801921 | orchestrator |
2025-09-23 07:10:25.801933 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:10:25.801947 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-23 07:10:25.801960 | orchestrator |
2025-09-23 07:10:25.801973 | orchestrator |
2025-09-23 07:10:25.801986 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:10:25.801998 | orchestrator | Tuesday 23 September 2025 07:10:25 +0000 (0:00:01.117) 0:00:09.445 *****
2025-09-23 07:10:25.802010 | orchestrator | ===============================================================================
2025-09-23 07:10:25.802065 | orchestrator | Gathering Facts --------------------------------------------------------- 3.73s
2025-09-23 07:10:25.802076 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s
2025-09-23 07:10:25.802087 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s
2025-09-23 07:10:25.802097 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s
2025-09-23 07:10:25.802108 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.92s
2025-09-23 07:10:25.802121 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s
2025-09-23 07:10:25.802154 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s
2025-09-23 07:10:25.802166 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-09-23 07:10:25.802176 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-09-23 07:10:25.802187 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2025-09-23 07:10:25.802198 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-09-23 07:10:25.802209 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-09-23 07:10:25.802220 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-09-23 07:10:26.071410 | orchestrator | + osism apply sshconfig
2025-09-23 07:10:38.075073 | orchestrator | 2025-09-23 07:10:38 | INFO  | Task 61d6216a-f80a-4e9c-abad-6f4d7664279e (sshconfig) was prepared for execution.
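The sshconfig role that runs next writes one fragment per inventory host under `.ssh/config.d` ("Ensure config for each host exist") and then concatenates them into a single client config ("Assemble ssh config"). A self-contained sketch of that fragment-and-assemble pattern — the paths, user name, and ssh options below are illustrative assumptions, not the role's actual template; it uses a temp dir where the role works on the operator's `~/.ssh`:

```shell
# Fragment-per-host layout, then one assembled config, mirroring the
# "Ensure config for each host exist" / "Assemble ssh config" tasks.
ssh_dir=$(mktemp -d)          # stand-in for the operator's ~/.ssh
mkdir -p "$ssh_dir/config.d"

for host in testbed-manager testbed-node-0 testbed-node-1; do
    cat > "$ssh_dir/config.d/$host" <<EOF
Host $host
    User dragon
    IdentityFile ~/.ssh/id_rsa
EOF
done

# Assemble: shell glob expansion is sorted, so the result is deterministic
cat "$ssh_dir"/config.d/* > "$ssh_dir/config"
chmod 600 "$ssh_dir/config"
```

Keeping per-host fragments separate means a single host's entry can be rewritten idempotently; only the final `cat` step touches the assembled config.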
2025-09-23 07:10:38.075188 | orchestrator | 2025-09-23 07:10:38 | INFO  | It takes a moment until task 61d6216a-f80a-4e9c-abad-6f4d7664279e (sshconfig) has been started and output is visible here. 2025-09-23 07:10:49.785707 | orchestrator | 2025-09-23 07:10:49.785847 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-23 07:10:49.785876 | orchestrator | 2025-09-23 07:10:49.785894 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-23 07:10:49.785912 | orchestrator | Tuesday 23 September 2025 07:10:41 +0000 (0:00:00.173) 0:00:00.173 ***** 2025-09-23 07:10:49.785928 | orchestrator | ok: [testbed-manager] 2025-09-23 07:10:49.785945 | orchestrator | 2025-09-23 07:10:49.785963 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-23 07:10:49.785982 | orchestrator | Tuesday 23 September 2025 07:10:42 +0000 (0:00:00.561) 0:00:00.735 ***** 2025-09-23 07:10:49.786000 | orchestrator | changed: [testbed-manager] 2025-09-23 07:10:49.786096 | orchestrator | 2025-09-23 07:10:49.786120 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-23 07:10:49.786141 | orchestrator | Tuesday 23 September 2025 07:10:43 +0000 (0:00:00.504) 0:00:01.239 ***** 2025-09-23 07:10:49.786201 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-23 07:10:49.786225 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-23 07:10:49.786248 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-23 07:10:49.786271 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-23 07:10:49.786292 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-23 07:10:49.786314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-23 07:10:49.786336 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-09-23 07:10:49.786358 | orchestrator | 2025-09-23 07:10:49.786379 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-23 07:10:49.786397 | orchestrator | Tuesday 23 September 2025 07:10:48 +0000 (0:00:05.816) 0:00:07.056 ***** 2025-09-23 07:10:49.786417 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:10:49.786438 | orchestrator | 2025-09-23 07:10:49.786458 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-23 07:10:49.786479 | orchestrator | Tuesday 23 September 2025 07:10:48 +0000 (0:00:00.050) 0:00:07.106 ***** 2025-09-23 07:10:49.786500 | orchestrator | changed: [testbed-manager] 2025-09-23 07:10:49.786522 | orchestrator | 2025-09-23 07:10:49.786573 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:10:49.786595 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:10:49.786616 | orchestrator | 2025-09-23 07:10:49.786636 | orchestrator | 2025-09-23 07:10:49.786656 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:10:49.786675 | orchestrator | Tuesday 23 September 2025 07:10:49 +0000 (0:00:00.612) 0:00:07.719 ***** 2025-09-23 07:10:49.786693 | orchestrator | =============================================================================== 2025-09-23 07:10:49.786712 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.82s 2025-09-23 07:10:49.786731 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2025-09-23 07:10:49.786749 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2025-09-23 07:10:49.786766 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.50s 2025-09-23 07:10:49.786784 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.05s 2025-09-23 07:10:50.083524 | orchestrator | + osism apply known-hosts 2025-09-23 07:11:02.089024 | orchestrator | 2025-09-23 07:11:02 | INFO  | Task 238b5f71-eb4b-498a-8655-a167e341b295 (known-hosts) was prepared for execution. 2025-09-23 07:11:02.089119 | orchestrator | 2025-09-23 07:11:02 | INFO  | It takes a moment until task 238b5f71-eb4b-498a-8655-a167e341b295 (known-hosts) has been started and output is visible here. 2025-09-23 07:11:18.661766 | orchestrator | 2025-09-23 07:11:18.661883 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-23 07:11:18.661900 | orchestrator | 2025-09-23 07:11:18.661912 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-23 07:11:18.661925 | orchestrator | Tuesday 23 September 2025 07:11:05 +0000 (0:00:00.167) 0:00:00.167 ***** 2025-09-23 07:11:18.661937 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-23 07:11:18.661948 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-23 07:11:18.661959 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-23 07:11:18.661970 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-23 07:11:18.661981 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-23 07:11:18.661991 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-23 07:11:18.662002 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-23 07:11:18.662072 | orchestrator | 2025-09-23 07:11:18.662086 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-23 07:11:18.662121 | orchestrator | Tuesday 23 September 2025 07:11:11 +0000 (0:00:06.187) 0:00:06.355 ***** 2025-09-23 
07:11:18.662144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-23 07:11:18.662157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-23 07:11:18.662168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-23 07:11:18.662179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-23 07:11:18.662190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-23 07:11:18.662202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-23 07:11:18.662212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-23 07:11:18.662223 | orchestrator | 2025-09-23 07:11:18.662234 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:18.662245 | orchestrator | Tuesday 23 September 2025 07:11:12 +0000 (0:00:00.184) 0:00:06.539 ***** 2025-09-23 07:11:18.662256 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIG8EmUf4hwZQE5qZzhgPkIPxOdaswtZUw+HBBL4MLN3F) 2025-09-23 07:11:18.662274 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/Gbitmyg950toZlglItX00C01A7DU5Q7uZJWLMajixQnSN1KWuNoAQnaPWqOxDdP7TCPi2kwv1/t9iNGrtn2jF4BN6uGak8Qi5Rvb3IkWcPhBwbCfL6aRPJZjRmc+jTvhF5yRyUqvgr5ommmu44glYHoZ8X6a5FzJ263gb+9TjJcQQqAFXHyR5A0Rtj4d8AiPCOTM4XqVbms5yWhp3sxYvUUJ+PWcDJrolmgfWLcDKKjXBAI1HQOStxtzla4FFnrzvgya53CeXf8wEVvEBS7WxGXHqZyTaenEkkJvzIgHUUywKLDKs/hP/stU03DvnICwaX0uP7eBTm2RLfMlgZEb2sIkeo0SV5Jn/+2ANZLh5V8JYEJ7PbKn84qGDXs/1dSoyM3pqIRRwob8nkbn3z8en4xxy+tf9663wro1WNEMi/69XVl/64gDuMtEIt1ztCUlJm928LX8qee4KREq3vSIimPQrmvRriTLl6xRMYxAs8ZPE3BBrGHBsmpgvMpIJhs=) 2025-09-23 07:11:18.662289 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKXk5X+OPkDp5l2P1LUq32xh5VZORlmmbjd7oSFjlpcGgcsR2h1X11TNcvSD92S3IGsQYetKdml7X+0umXesk2Q=) 2025-09-23 07:11:18.662303 | orchestrator | 2025-09-23 07:11:18.662315 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:18.662328 | orchestrator | Tuesday 23 September 2025 07:11:13 +0000 (0:00:01.216) 0:00:07.755 ***** 2025-09-23 07:11:18.662365 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdDFTLyrY6av/5CiRempe9YDSaUicegva9AHiNNLAt/H/obqy3jTogdTm3tMIhRjcoMmpVpuOcJOMAliovPz5z90HAJByTpm0hkvEFlD0KWZol3t75dJpFj4xvD4aJATRqkzlK3M1REnZfBFuBSH7/IDCmP2Sdo7gB4rsDZGBRbiDumAD+9d+qB4oLCk18mz114qjvu4v6eKrVATkllgs1xHVOSC6Tn8rkKNol5rJG/szwVAbvSVqpLTQwnk9ywBt/R0TMmJD1id+Wi9f8EZOhteIuTKyiKq2lPmva0v/ArnRg97TTpl+SIQiIwF0rXKm39mmX807/NEJdz3Xr7q4FcfLqwhLNhAryKqLfBH9RXEUbl93nEpzkaWTYIYh8rF8gf4+5hnF3CzTOI2L94Uoh2iHBqh2uPNMTwYjpa1VlxUXH8UuIlqE97xSHBQm1smVtWDejmRgmu6z25WIv6Pv4aRondzI/giM+WrD9zkGdwiRGE/wJT9dAWb2rHRi1w7c=) 2025-09-23 07:11:18.662379 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcYMY4MkByZ+1rVt7bE7XKOIVC5eqvuEYe9GFRX0kHaJJRYyRujKmNWen/5eWTU8ca7GZpx0iPxPPW2U/Y7cXI=) 2025-09-23 07:11:18.662400 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPYr8tiUcs0KFa00NDwo31qpUmjVqksdCAXS4s4LOKd5) 2025-09-23 07:11:18.662416 | orchestrator | 2025-09-23 07:11:18.662435 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:18.662457 | orchestrator | Tuesday 23 September 2025 07:11:14 +0000 (0:00:01.105) 0:00:08.861 ***** 2025-09-23 07:11:18.662476 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDCtrNFcYamD0nuBusJpKMZrV6FA3JvkBxHiRW67BOVvlpk13PIpQPnZUsQ9ahH9Zw6KFozNROLRZk8WiFdp090=) 2025-09-23 07:11:18.662490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnaMWRsEejimsDqnDBrOpc5h4UdBwh32w0WeC3u29hz0XpsxIOkHDKtuUMynoslTeV4Pm5vAVVNVxmRnQCk8R0Abj9/7HXN4rr4ImMZHSuqqd7KcPf/Pwn0q6xYeXbmcTBDfuMFEGorHVlpA0/ZntUum067eVpkloqvACDx4qobyfvp5RomG5z8HWqbhn9C2V7jAWpL9rhD7+mR/Y/SIhe+g4RCoqSE79tSGPvyiIC6rWTpIQ49gsgP3GNCNlUlGMKP2eslN6IVJbGj/1cgeVDLXBQHkLOL7bGJYDFn4ABHqr/JL5Ey+KcedqJvuGCfGVfvhmqJ1jq12OjNRcX4a0PPMwyaEliznde/uERhUTeAhN4ErgBX/8b2eykPN1ZtiDRW5Bp1VyCW2GOZPgroBqWdQ/TQKlPzsgSAulgy2P+xglZc9tpdP5wVjzntuf/Mu2e0HxVqAkSxpKuhEQlLDOvVblUQN7yq2iiIgb7nLQ0hxJYli3VuNjc4RXjWQ2fSW0=) 2025-09-23 07:11:18.662595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILNOWNy8CLjzaoBZ2b9BCvphgN5929cZxbOItXjKhuKX) 2025-09-23 07:11:18.662609 | orchestrator | 2025-09-23 07:11:18.662622 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:18.662634 | orchestrator | Tuesday 23 September 2025 07:11:15 +0000 (0:00:01.101) 
0:00:09.963 ***** 2025-09-23 07:11:18.662650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC00Ksu9pU1vzxzqMnTummGViwytvXo+lth1SV7Nvu3Vxo4GlD597FsIVwXiRVy4nJTSVi6N8YefE3OVWqOwqcz1bg2tEy0vaVLS1Rx9AOWi471iFk7JIx4FRzs3KWP9Lj743N/oCDqrGoEpNqvt1IM8JggUTOnp8nD42Hrx9XV122y6GJs1YE8dfWzVsKxroK7o557gon8CuO7FD9unfHShgZkZ5WBH3hiwgg359DqF3Ak6VLdL23PHxYhJPcVhDQS5KTP0zxJr8rFcQW8ODNEfvk4CJGVpltv0eDF3WxHPTxRr1wL2V2ShwLPIIKCPHMnNVlfLah/anEJNGljx+7joVF7gRvgAh3Ybo220GUdy1HTkxnkAlgetsHOR9KE9f3/lHREqYxAjHy61OpcBZ3LqifbS8e1aCpene+nOr1hRfV+aWdvYC1ufuUsW/6OqRc5aG2bI8kbBaqzdloYSLETVryUrMG+jtx0qrAqeRX1lMnQSMGN5ezexY195xrzP/E=) 2025-09-23 07:11:18.662661 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE32N23nwVIn9bh6hd+HwZB6V85YTcrzmFKr8olK8381RVa/RYt8dfHbWd+mAPpfqRbXFDWlk4xALJy1jzoEhEo=) 2025-09-23 07:11:18.662672 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOx5DECBBT4RyGhBkhFoX5q7cTGaYnr66q6p9GPEcphd) 2025-09-23 07:11:18.662683 | orchestrator | 2025-09-23 07:11:18.662694 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:18.662704 | orchestrator | Tuesday 23 September 2025 07:11:16 +0000 (0:00:01.074) 0:00:11.038 ***** 2025-09-23 07:11:18.662715 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDSR7uiioZ+/CYK5+yO4YpyTKQKD4xIA1uf533h663yZf+Ru8s0wxRhtruH8ej660EscUKVHjsHrBAL+XzCNesQ7ZkBvTmil8YdkbogqjrTJGftDqa/jZkF5xnM0SUiJumRt0yq6PDaHffiWc+i3d7C2hXkxgoE2AaeqJ4BezaXrogkk5BLcieomr3x9PdEZagxWrWuIVkFpx4XbgV/PBExr1XxiZf9V/JqU6J0QL2ox9zZ4wYtgm7xykRw8HiGVWOAaCUEKhZSSOuHE+lkbC9dZDQO62ErexWV+O2R3x6hMsjPPfvf5MkKmiEAUKxvwLnc2Ur+D/pc4dwpSMWht3TEid+7FR8Nx5BcajcZVr2GvMWkd4+/yTDger5mpp3bazbcco95Y3H07rrqwYc9oONGrcIIzXCjlX+3KF91xQ+8aPxBz0kZoDmq4iPAqMOj7+JDeCiccXdcBWxAjDjdVO9f+keBwJWctMsDlQ7XewwwNTbwrHr3uDU+eTlrtG5k5rk=) 2025-09-23 07:11:18.662727 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZXfpmQ95NWtq73ly3ttc7VGNBP5kYHHELxB4K0OGlXmRTPFIFdCZU0kgBac8LBO3GFPndyZSqQha2ZDcAHiek=) 2025-09-23 07:11:18.662744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPJXFpmiC8QLQlzRkh9/r7hRLiGxPy6AEAOwQaMKMDvQ) 2025-09-23 07:11:18.662755 | orchestrator | 2025-09-23 07:11:18.662766 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:18.662777 | orchestrator | Tuesday 23 September 2025 07:11:17 +0000 (0:00:01.072) 0:00:12.110 ***** 2025-09-23 07:11:18.662796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF2B9LxNA6JvekkfQ+CzZc4kYLqIK31CWtdoboQjqbPt) 2025-09-23 07:11:29.690683 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC9NIXCTQClanWERerAuO9YKzS4slweVGT5lf3L1DH+diBoUhyo6um2r+qviskse/KrTMbQZxpoW/plleP0aW3PPftxWeWix49904/dPBam5lRFVULR++QG7ok2xqjrnwhB3IjFt47mDYtQfy4v59lS2Ge9KvWoisogqEY8BtZKGTIJLqsxFt3OidflEYdbrYKR0H1It6z3loH3H1HKwmWBQkk4709+eSq0alIw/kxEXO5yZ51vZU7DUbTXkkuI/+hZiQSGv6tVfiwtlPZUSnO7xZwT/yqVKbUfEYjKAn3h4wuFC3OIivlV1LmErNHHlCCfH//4A72p1SReZsi9IenjQV4EpyKLeENYWc1mGalCknev6eYShFHFIEZUp4JnOyPNK+yhSMHZMH+88QOVvuIM/+JRubJQEWlrnyu4RENjV1cIb+nrviyGzVEpvstzP+Ojb9t2/EYcnZvuB+mS0+U523TWzhr7OlRgS7vqE5pwX+CvhvZuyaJL227ux/ybAjU=) 2025-09-23 07:11:29.690816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGILCx/OUURsYM2ef7hgX9DKrHXFEKBCRnZ5UO8PMt7tVLlP6zmVJKa4tGv3vmsNKAY6DzgLmxVXK7Gg7nZfYLo=) 2025-09-23 07:11:29.690836 | orchestrator | 2025-09-23 07:11:29.690893 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:29.690908 | orchestrator | Tuesday 23 September 2025 07:11:18 +0000 (0:00:01.025) 0:00:13.136 ***** 2025-09-23 07:11:29.690920 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJhkpqesDbc3psSFEuG0d1qyDjzJxBob4+pUrFmZXGRx) 2025-09-23 07:11:29.690933 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSgD93WX10Qlvkv6qxnDZHF/HC93R1mr598KzAAbImuWA/G8XZkV9VoXuFg1Dvq1rfE407XMrOjdRK05tvLWSRvejLIyLT1mK+R8jrEzUgr0x8GJ81jcOzagRt8F9PsuZfbftLqp/Cc5FBiXmhV4xKtiSYUfWHXRp0pSLX+lhyPL7Q1LCGHNvx3Ddh8g968o1WdSVhRHIAfF8zd2MxIUBmMqMDAIdIIjmG/RJbXXP0bf9AAuUKt+kyN7T6/2I0+J5sNkVWDtxcIW9zIUtOJmnVqbZG3YH9xoNzQKzjvrDjrHzNHiMF8tGjr75KHBKUsaHynNIkFzsOB6Hwicx/lEiAx/nok5FFfnRc3F7CMY0hMZq4FD1z8M9ArjCrY07a8djyYk8LBvOhv8zLwQC+OaR2u1enWEUYBWoLJob9EmRKeBX2DaRjSWQTaAQQfgT8cxuTEJvpuPoVNjqGapd+zsZLfnYpAGtO+qBzQeTbj827GlU7eBNQPy7bK5/ZZjjiUzM=) 2025-09-23 07:11:29.690946 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFOXWlorQX6kdHAYqWC169whIlC8152Q9Xyjd1UutS+romfCNzIuMYQM1xrGdYScgb02KlMhhTjd0U+senlf6Qs=) 2025-09-23 07:11:29.690958 | orchestrator | 2025-09-23 07:11:29.690969 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-23 07:11:29.690981 | orchestrator | Tuesday 23 September 2025 07:11:19 +0000 (0:00:01.129) 0:00:14.265 ***** 2025-09-23 07:11:29.690993 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-23 07:11:29.691005 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-23 07:11:29.691015 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-23 07:11:29.691026 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-23 07:11:29.691037 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-23 07:11:29.691067 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-23 07:11:29.691107 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-23 07:11:29.691132 | orchestrator | 2025-09-23 07:11:29.691144 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-23 07:11:29.691156 | orchestrator | Tuesday 23 September 2025 07:11:25 +0000 (0:00:05.289) 0:00:19.555 ***** 2025-09-23 07:11:29.691168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-23 07:11:29.691209 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-23 07:11:29.691222 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-23 07:11:29.691236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-23 07:11:29.691248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-23 07:11:29.691261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-23 07:11:29.691273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-23 07:11:29.691288 | orchestrator | 2025-09-23 07:11:29.691328 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:29.691350 | orchestrator | Tuesday 23 September 2025 07:11:25 +0000 (0:00:00.220) 0:00:19.775 ***** 2025-09-23 07:11:29.691385 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG8EmUf4hwZQE5qZzhgPkIPxOdaswtZUw+HBBL4MLN3F) 2025-09-23 07:11:29.691404 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/Gbitmyg950toZlglItX00C01A7DU5Q7uZJWLMajixQnSN1KWuNoAQnaPWqOxDdP7TCPi2kwv1/t9iNGrtn2jF4BN6uGak8Qi5Rvb3IkWcPhBwbCfL6aRPJZjRmc+jTvhF5yRyUqvgr5ommmu44glYHoZ8X6a5FzJ263gb+9TjJcQQqAFXHyR5A0Rtj4d8AiPCOTM4XqVbms5yWhp3sxYvUUJ+PWcDJrolmgfWLcDKKjXBAI1HQOStxtzla4FFnrzvgya53CeXf8wEVvEBS7WxGXHqZyTaenEkkJvzIgHUUywKLDKs/hP/stU03DvnICwaX0uP7eBTm2RLfMlgZEb2sIkeo0SV5Jn/+2ANZLh5V8JYEJ7PbKn84qGDXs/1dSoyM3pqIRRwob8nkbn3z8en4xxy+tf9663wro1WNEMi/69XVl/64gDuMtEIt1ztCUlJm928LX8qee4KREq3vSIimPQrmvRriTLl6xRMYxAs8ZPE3BBrGHBsmpgvMpIJhs=) 2025-09-23 07:11:29.691424 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKXk5X+OPkDp5l2P1LUq32xh5VZORlmmbjd7oSFjlpcGgcsR2h1X11TNcvSD92S3IGsQYetKdml7X+0umXesk2Q=) 2025-09-23 07:11:29.691442 | orchestrator | 2025-09-23 07:11:29.691458 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:29.691477 | orchestrator | Tuesday 23 September 2025 07:11:26 +0000 (0:00:01.082) 0:00:20.858 ***** 2025-09-23 07:11:29.691495 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPYr8tiUcs0KFa00NDwo31qpUmjVqksdCAXS4s4LOKd5) 2025-09-23 07:11:29.691515 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdDFTLyrY6av/5CiRempe9YDSaUicegva9AHiNNLAt/H/obqy3jTogdTm3tMIhRjcoMmpVpuOcJOMAliovPz5z90HAJByTpm0hkvEFlD0KWZol3t75dJpFj4xvD4aJATRqkzlK3M1REnZfBFuBSH7/IDCmP2Sdo7gB4rsDZGBRbiDumAD+9d+qB4oLCk18mz114qjvu4v6eKrVATkllgs1xHVOSC6Tn8rkKNol5rJG/szwVAbvSVqpLTQwnk9ywBt/R0TMmJD1id+Wi9f8EZOhteIuTKyiKq2lPmva0v/ArnRg97TTpl+SIQiIwF0rXKm39mmX807/NEJdz3Xr7q4FcfLqwhLNhAryKqLfBH9RXEUbl93nEpzkaWTYIYh8rF8gf4+5hnF3CzTOI2L94Uoh2iHBqh2uPNMTwYjpa1VlxUXH8UuIlqE97xSHBQm1smVtWDejmRgmu6z25WIv6Pv4aRondzI/giM+WrD9zkGdwiRGE/wJT9dAWb2rHRi1w7c=) 2025-09-23 07:11:29.691563 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcYMY4MkByZ+1rVt7bE7XKOIVC5eqvuEYe9GFRX0kHaJJRYyRujKmNWen/5eWTU8ca7GZpx0iPxPPW2U/Y7cXI=) 2025-09-23 07:11:29.691581 | orchestrator | 2025-09-23 07:11:29.691615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:29.691634 | orchestrator | Tuesday 23 September 2025 07:11:27 +0000 (0:00:01.106) 0:00:21.964 ***** 2025-09-23 07:11:29.691653 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnaMWRsEejimsDqnDBrOpc5h4UdBwh32w0WeC3u29hz0XpsxIOkHDKtuUMynoslTeV4Pm5vAVVNVxmRnQCk8R0Abj9/7HXN4rr4ImMZHSuqqd7KcPf/Pwn0q6xYeXbmcTBDfuMFEGorHVlpA0/ZntUum067eVpkloqvACDx4qobyfvp5RomG5z8HWqbhn9C2V7jAWpL9rhD7+mR/Y/SIhe+g4RCoqSE79tSGPvyiIC6rWTpIQ49gsgP3GNCNlUlGMKP2eslN6IVJbGj/1cgeVDLXBQHkLOL7bGJYDFn4ABHqr/JL5Ey+KcedqJvuGCfGVfvhmqJ1jq12OjNRcX4a0PPMwyaEliznde/uERhUTeAhN4ErgBX/8b2eykPN1ZtiDRW5Bp1VyCW2GOZPgroBqWdQ/TQKlPzsgSAulgy2P+xglZc9tpdP5wVjzntuf/Mu2e0HxVqAkSxpKuhEQlLDOvVblUQN7yq2iiIgb7nLQ0hxJYli3VuNjc4RXjWQ2fSW0=) 2025-09-23 07:11:29.691673 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDCtrNFcYamD0nuBusJpKMZrV6FA3JvkBxHiRW67BOVvlpk13PIpQPnZUsQ9ahH9Zw6KFozNROLRZk8WiFdp090=) 2025-09-23 07:11:29.691692 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILNOWNy8CLjzaoBZ2b9BCvphgN5929cZxbOItXjKhuKX) 2025-09-23 07:11:29.691710 | orchestrator | 2025-09-23 07:11:29.691729 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:29.691745 | orchestrator | Tuesday 23 September 2025 07:11:28 +0000 (0:00:01.125) 0:00:23.090 ***** 2025-09-23 07:11:29.691795 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC00Ksu9pU1vzxzqMnTummGViwytvXo+lth1SV7Nvu3Vxo4GlD597FsIVwXiRVy4nJTSVi6N8YefE3OVWqOwqcz1bg2tEy0vaVLS1Rx9AOWi471iFk7JIx4FRzs3KWP9Lj743N/oCDqrGoEpNqvt1IM8JggUTOnp8nD42Hrx9XV122y6GJs1YE8dfWzVsKxroK7o557gon8CuO7FD9unfHShgZkZ5WBH3hiwgg359DqF3Ak6VLdL23PHxYhJPcVhDQS5KTP0zxJr8rFcQW8ODNEfvk4CJGVpltv0eDF3WxHPTxRr1wL2V2ShwLPIIKCPHMnNVlfLah/anEJNGljx+7joVF7gRvgAh3Ybo220GUdy1HTkxnkAlgetsHOR9KE9f3/lHREqYxAjHy61OpcBZ3LqifbS8e1aCpene+nOr1hRfV+aWdvYC1ufuUsW/6OqRc5aG2bI8kbBaqzdloYSLETVryUrMG+jtx0qrAqeRX1lMnQSMGN5ezexY195xrzP/E=) 2025-09-23 07:11:34.023988 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE32N23nwVIn9bh6hd+HwZB6V85YTcrzmFKr8olK8381RVa/RYt8dfHbWd+mAPpfqRbXFDWlk4xALJy1jzoEhEo=) 2025-09-23 07:11:34.024060 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOx5DECBBT4RyGhBkhFoX5q7cTGaYnr66q6p9GPEcphd) 2025-09-23 07:11:34.024066 | orchestrator | 2025-09-23 07:11:34.024072 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:34.024077 | orchestrator | Tuesday 23 September 2025 07:11:29 +0000 (0:00:01.074) 0:00:24.165 ***** 2025-09-23 07:11:34.024083 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSR7uiioZ+/CYK5+yO4YpyTKQKD4xIA1uf533h663yZf+Ru8s0wxRhtruH8ej660EscUKVHjsHrBAL+XzCNesQ7ZkBvTmil8YdkbogqjrTJGftDqa/jZkF5xnM0SUiJumRt0yq6PDaHffiWc+i3d7C2hXkxgoE2AaeqJ4BezaXrogkk5BLcieomr3x9PdEZagxWrWuIVkFpx4XbgV/PBExr1XxiZf9V/JqU6J0QL2ox9zZ4wYtgm7xykRw8HiGVWOAaCUEKhZSSOuHE+lkbC9dZDQO62ErexWV+O2R3x6hMsjPPfvf5MkKmiEAUKxvwLnc2Ur+D/pc4dwpSMWht3TEid+7FR8Nx5BcajcZVr2GvMWkd4+/yTDger5mpp3bazbcco95Y3H07rrqwYc9oONGrcIIzXCjlX+3KF91xQ+8aPxBz0kZoDmq4iPAqMOj7+JDeCiccXdcBWxAjDjdVO9f+keBwJWctMsDlQ7XewwwNTbwrHr3uDU+eTlrtG5k5rk=) 2025-09-23 07:11:34.024088 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZXfpmQ95NWtq73ly3ttc7VGNBP5kYHHELxB4K0OGlXmRTPFIFdCZU0kgBac8LBO3GFPndyZSqQha2ZDcAHiek=) 2025-09-23 07:11:34.024092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPJXFpmiC8QLQlzRkh9/r7hRLiGxPy6AEAOwQaMKMDvQ) 2025-09-23 07:11:34.024096 | orchestrator | 2025-09-23 07:11:34.024100 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:34.024116 | orchestrator | Tuesday 23 September 2025 07:11:30 +0000 (0:00:01.134) 0:00:25.299 ***** 2025-09-23 07:11:34.024137 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGILCx/OUURsYM2ef7hgX9DKrHXFEKBCRnZ5UO8PMt7tVLlP6zmVJKa4tGv3vmsNKAY6DzgLmxVXK7Gg7nZfYLo=) 2025-09-23 07:11:34.024141 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9NIXCTQClanWERerAuO9YKzS4slweVGT5lf3L1DH+diBoUhyo6um2r+qviskse/KrTMbQZxpoW/plleP0aW3PPftxWeWix49904/dPBam5lRFVULR++QG7ok2xqjrnwhB3IjFt47mDYtQfy4v59lS2Ge9KvWoisogqEY8BtZKGTIJLqsxFt3OidflEYdbrYKR0H1It6z3loH3H1HKwmWBQkk4709+eSq0alIw/kxEXO5yZ51vZU7DUbTXkkuI/+hZiQSGv6tVfiwtlPZUSnO7xZwT/yqVKbUfEYjKAn3h4wuFC3OIivlV1LmErNHHlCCfH//4A72p1SReZsi9IenjQV4EpyKLeENYWc1mGalCknev6eYShFHFIEZUp4JnOyPNK+yhSMHZMH+88QOVvuIM/+JRubJQEWlrnyu4RENjV1cIb+nrviyGzVEpvstzP+Ojb9t2/EYcnZvuB+mS0+U523TWzhr7OlRgS7vqE5pwX+CvhvZuyaJL227ux/ybAjU=) 2025-09-23 07:11:34.024146 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF2B9LxNA6JvekkfQ+CzZc4kYLqIK31CWtdoboQjqbPt) 2025-09-23 07:11:34.024149 | orchestrator | 2025-09-23 07:11:34.024153 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-23 07:11:34.024157 | orchestrator | Tuesday 23 September 2025 07:11:31 +0000 (0:00:01.078) 
0:00:26.377 ***** 2025-09-23 07:11:34.024161 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJhkpqesDbc3psSFEuG0d1qyDjzJxBob4+pUrFmZXGRx) 2025-09-23 07:11:34.024165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSgD93WX10Qlvkv6qxnDZHF/HC93R1mr598KzAAbImuWA/G8XZkV9VoXuFg1Dvq1rfE407XMrOjdRK05tvLWSRvejLIyLT1mK+R8jrEzUgr0x8GJ81jcOzagRt8F9PsuZfbftLqp/Cc5FBiXmhV4xKtiSYUfWHXRp0pSLX+lhyPL7Q1LCGHNvx3Ddh8g968o1WdSVhRHIAfF8zd2MxIUBmMqMDAIdIIjmG/RJbXXP0bf9AAuUKt+kyN7T6/2I0+J5sNkVWDtxcIW9zIUtOJmnVqbZG3YH9xoNzQKzjvrDjrHzNHiMF8tGjr75KHBKUsaHynNIkFzsOB6Hwicx/lEiAx/nok5FFfnRc3F7CMY0hMZq4FD1z8M9ArjCrY07a8djyYk8LBvOhv8zLwQC+OaR2u1enWEUYBWoLJob9EmRKeBX2DaRjSWQTaAQQfgT8cxuTEJvpuPoVNjqGapd+zsZLfnYpAGtO+qBzQeTbj827GlU7eBNQPy7bK5/ZZjjiUzM=) 2025-09-23 07:11:34.024169 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFOXWlorQX6kdHAYqWC169whIlC8152Q9Xyjd1UutS+romfCNzIuMYQM1xrGdYScgb02KlMhhTjd0U+senlf6Qs=) 2025-09-23 07:11:34.024173 | orchestrator | 2025-09-23 07:11:34.024177 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-23 07:11:34.024180 | orchestrator | Tuesday 23 September 2025 07:11:32 +0000 (0:00:01.077) 0:00:27.455 ***** 2025-09-23 07:11:34.024185 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-23 07:11:34.024189 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-23 07:11:34.024203 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-23 07:11:34.024207 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-23 07:11:34.024211 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-23 07:11:34.024215 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-23 07:11:34.024218 
| orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-23 07:11:34.024222 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:11:34.024226 | orchestrator | 2025-09-23 07:11:34.024230 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-23 07:11:34.024244 | orchestrator | Tuesday 23 September 2025 07:11:33 +0000 (0:00:00.150) 0:00:27.605 ***** 2025-09-23 07:11:34.024248 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:11:34.024251 | orchestrator | 2025-09-23 07:11:34.024255 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-23 07:11:34.024266 | orchestrator | Tuesday 23 September 2025 07:11:33 +0000 (0:00:00.053) 0:00:27.659 ***** 2025-09-23 07:11:34.024270 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:11:34.024273 | orchestrator | 2025-09-23 07:11:34.024281 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-23 07:11:34.024285 | orchestrator | Tuesday 23 September 2025 07:11:33 +0000 (0:00:00.060) 0:00:27.719 ***** 2025-09-23 07:11:34.024288 | orchestrator | changed: [testbed-manager] 2025-09-23 07:11:34.024292 | orchestrator | 2025-09-23 07:11:34.024296 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:11:34.024300 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-23 07:11:34.024305 | orchestrator | 2025-09-23 07:11:34.024309 | orchestrator | 2025-09-23 07:11:34.024313 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:11:34.024316 | orchestrator | Tuesday 23 September 2025 07:11:33 +0000 (0:00:00.525) 0:00:28.245 ***** 2025-09-23 07:11:34.024321 | orchestrator | =============================================================================== 2025-09-23 
07:11:34.024324 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.19s 2025-09-23 07:11:34.024329 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.29s 2025-09-23 07:11:34.024333 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-09-23 07:11:34.024337 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-23 07:11:34.024341 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-23 07:11:34.024345 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-23 07:11:34.024350 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-23 07:11:34.024354 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-23 07:11:34.024358 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-23 07:11:34.024362 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-23 07:11:34.024369 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-23 07:11:34.024372 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-23 07:11:34.024376 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-23 07:11:34.024380 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-23 07:11:34.024384 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-23 07:11:34.024388 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-23 
07:11:34.024391 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.53s 2025-09-23 07:11:34.024395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.22s 2025-09-23 07:11:34.024399 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-09-23 07:11:34.024403 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-09-23 07:11:34.327588 | orchestrator | + osism apply squid 2025-09-23 07:11:46.355791 | orchestrator | 2025-09-23 07:11:46 | INFO  | Task b680f50e-ac66-4bda-b556-a73813206ccd (squid) was prepared for execution. 2025-09-23 07:11:46.355920 | orchestrator | 2025-09-23 07:11:46 | INFO  | It takes a moment until task b680f50e-ac66-4bda-b556-a73813206ccd (squid) has been started and output is visible here. 2025-09-23 07:13:40.657246 | orchestrator | 2025-09-23 07:13:40.657373 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-23 07:13:40.657390 | orchestrator | 2025-09-23 07:13:40.657411 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-23 07:13:40.657441 | orchestrator | Tuesday 23 September 2025 07:11:50 +0000 (0:00:00.168) 0:00:00.168 ***** 2025-09-23 07:13:40.657485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-23 07:13:40.657523 | orchestrator | 2025-09-23 07:13:40.657533 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-23 07:13:40.657544 | orchestrator | Tuesday 23 September 2025 07:11:50 +0000 (0:00:00.089) 0:00:00.258 ***** 2025-09-23 07:13:40.657571 | orchestrator | ok: [testbed-manager] 2025-09-23 07:13:40.657582 | orchestrator | 2025-09-23 07:13:40.657591 
| orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-23 07:13:40.657601 | orchestrator | Tuesday 23 September 2025 07:11:51 +0000 (0:00:01.427) 0:00:01.686 ***** 2025-09-23 07:13:40.657611 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-23 07:13:40.657621 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-23 07:13:40.657631 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-23 07:13:40.657641 | orchestrator | 2025-09-23 07:13:40.657650 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-23 07:13:40.657660 | orchestrator | Tuesday 23 September 2025 07:11:53 +0000 (0:00:01.164) 0:00:02.851 ***** 2025-09-23 07:13:40.657670 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-23 07:13:40.657680 | orchestrator | 2025-09-23 07:13:40.657689 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-23 07:13:40.657699 | orchestrator | Tuesday 23 September 2025 07:11:54 +0000 (0:00:01.090) 0:00:03.941 ***** 2025-09-23 07:13:40.657708 | orchestrator | ok: [testbed-manager] 2025-09-23 07:13:40.657718 | orchestrator | 2025-09-23 07:13:40.657727 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-23 07:13:40.657737 | orchestrator | Tuesday 23 September 2025 07:11:54 +0000 (0:00:00.360) 0:00:04.301 ***** 2025-09-23 07:13:40.657748 | orchestrator | changed: [testbed-manager] 2025-09-23 07:13:40.657759 | orchestrator | 2025-09-23 07:13:40.657770 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-23 07:13:40.657795 | orchestrator | Tuesday 23 September 2025 07:11:55 +0000 (0:00:00.905) 0:00:05.207 ***** 2025-09-23 07:13:40.657806 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service 
(10 retries left). 2025-09-23 07:13:40.657819 | orchestrator | ok: [testbed-manager] 2025-09-23 07:13:40.657829 | orchestrator | 2025-09-23 07:13:40.657840 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-23 07:13:40.657851 | orchestrator | Tuesday 23 September 2025 07:12:27 +0000 (0:00:32.017) 0:00:37.224 ***** 2025-09-23 07:13:40.657862 | orchestrator | changed: [testbed-manager] 2025-09-23 07:13:40.657873 | orchestrator | 2025-09-23 07:13:40.657883 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-23 07:13:40.657895 | orchestrator | Tuesday 23 September 2025 07:12:39 +0000 (0:00:12.162) 0:00:49.387 ***** 2025-09-23 07:13:40.657906 | orchestrator | Pausing for 60 seconds 2025-09-23 07:13:40.657917 | orchestrator | changed: [testbed-manager] 2025-09-23 07:13:40.657928 | orchestrator | 2025-09-23 07:13:40.657939 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-23 07:13:40.657949 | orchestrator | Tuesday 23 September 2025 07:13:39 +0000 (0:01:00.079) 0:01:49.466 ***** 2025-09-23 07:13:40.657962 | orchestrator | ok: [testbed-manager] 2025-09-23 07:13:40.658001 | orchestrator | 2025-09-23 07:13:40.658088 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-23 07:13:40.658108 | orchestrator | Tuesday 23 September 2025 07:13:39 +0000 (0:00:00.060) 0:01:49.527 ***** 2025-09-23 07:13:40.658118 | orchestrator | changed: [testbed-manager] 2025-09-23 07:13:40.658128 | orchestrator | 2025-09-23 07:13:40.658137 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:13:40.658147 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:13:40.658156 | orchestrator | 2025-09-23 07:13:40.658165 | orchestrator | 2025-09-23 07:13:40.658186 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:13:40.658195 | orchestrator | Tuesday 23 September 2025 07:13:40 +0000 (0:00:00.680) 0:01:50.207 ***** 2025-09-23 07:13:40.658205 | orchestrator | =============================================================================== 2025-09-23 07:13:40.658214 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-23 07:13:40.658223 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.02s 2025-09-23 07:13:40.658233 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.16s 2025-09-23 07:13:40.658242 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s 2025-09-23 07:13:40.658302 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s 2025-09-23 07:13:40.658313 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2025-09-23 07:13:40.658322 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s 2025-09-23 07:13:40.658332 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.68s 2025-09-23 07:13:40.658341 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2025-09-23 07:13:40.658351 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-09-23 07:13:40.658360 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-09-23 07:13:40.928333 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-23 07:13:40.928561 | orchestrator | ++ semver latest 9.0.0 2025-09-23 07:13:40.980951 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-23 07:13:40.981054 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-23 
07:13:40.981692 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-23 07:13:53.055223 | orchestrator | 2025-09-23 07:13:53 | INFO  | Task c9a37678-a0ab-44ab-97a7-034515b210b9 (operator) was prepared for execution. 2025-09-23 07:13:53.055361 | orchestrator | 2025-09-23 07:13:53 | INFO  | It takes a moment until task c9a37678-a0ab-44ab-97a7-034515b210b9 (operator) has been started and output is visible here. 2025-09-23 07:14:08.771880 | orchestrator | 2025-09-23 07:14:08.771977 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-23 07:14:08.771990 | orchestrator | 2025-09-23 07:14:08.772000 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 07:14:08.772009 | orchestrator | Tuesday 23 September 2025 07:13:56 +0000 (0:00:00.147) 0:00:00.147 ***** 2025-09-23 07:14:08.772018 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:14:08.772028 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:14:08.772037 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:14:08.772045 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:14:08.772054 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:14:08.772063 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:14:08.772075 | orchestrator | 2025-09-23 07:14:08.772090 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-23 07:14:08.772105 | orchestrator | Tuesday 23 September 2025 07:14:00 +0000 (0:00:03.329) 0:00:03.477 ***** 2025-09-23 07:14:08.772119 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:14:08.772133 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:14:08.772147 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:14:08.772167 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:14:08.772186 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:14:08.772200 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:14:08.772214 | orchestrator | 
2025-09-23 07:14:08.772234 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-09-23 07:14:08.772248 | orchestrator |
2025-09-23 07:14:08.772257 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-23 07:14:08.772266 | orchestrator | Tuesday 23 September 2025 07:14:01 +0000 (0:00:00.763) 0:00:04.241 *****
2025-09-23 07:14:08.772274 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:14:08.772283 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:14:08.772292 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:14:08.772324 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:14:08.772334 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:14:08.772342 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:14:08.772351 | orchestrator |
2025-09-23 07:14:08.772359 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-23 07:14:08.772368 | orchestrator | Tuesday 23 September 2025 07:14:01 +0000 (0:00:00.163) 0:00:04.404 *****
2025-09-23 07:14:08.772377 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:14:08.772385 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:14:08.772394 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:14:08.772402 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:14:08.772411 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:14:08.772419 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:14:08.772428 | orchestrator |
2025-09-23 07:14:08.772485 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-23 07:14:08.772497 | orchestrator | Tuesday 23 September 2025 07:14:01 +0000 (0:00:00.583) 0:00:04.573 *****
2025-09-23 07:14:08.772507 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:14:08.772518 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:14:08.772529 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:14:08.772539 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:14:08.772553 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:14:08.772563 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:14:08.772574 | orchestrator |
2025-09-23 07:14:08.772584 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-23 07:14:08.772594 | orchestrator | Tuesday 23 September 2025 07:14:01 +0000 (0:00:00.890) 0:00:05.156 *****
2025-09-23 07:14:08.772604 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:14:08.772614 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:14:08.772624 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:14:08.772634 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:14:08.772644 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:14:08.772653 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:14:08.772664 | orchestrator |
2025-09-23 07:14:08.772672 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-23 07:14:08.772681 | orchestrator | Tuesday 23 September 2025 07:14:02 +0000 (0:00:01.235) 0:00:06.047 *****
2025-09-23 07:14:08.772690 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-09-23 07:14:08.772699 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-09-23 07:14:08.772708 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-09-23 07:14:08.772716 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-09-23 07:14:08.772725 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-09-23 07:14:08.772733 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-09-23 07:14:08.772742 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-09-23 07:14:08.772750 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-09-23 07:14:08.772759 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-09-23 07:14:08.772768 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-09-23 07:14:08.772776 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-09-23 07:14:08.772785 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-09-23 07:14:08.772793 | orchestrator |
2025-09-23 07:14:08.772802 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-23 07:14:08.772811 | orchestrator | Tuesday 23 September 2025 07:14:04 +0000 (0:00:01.235) 0:00:07.282 *****
2025-09-23 07:14:08.772819 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:14:08.772828 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:14:08.772837 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:14:08.772845 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:14:08.772854 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:14:08.772862 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:14:08.772871 | orchestrator |
2025-09-23 07:14:08.772879 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-23 07:14:08.772896 | orchestrator | Tuesday 23 September 2025 07:14:05 +0000 (0:00:01.250) 0:00:08.533 *****
2025-09-23 07:14:08.772904 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-09-23 07:14:08.772913 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-09-23 07:14:08.772922 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-23 07:14:08.772931 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-23 07:14:08.772956 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-23 07:14:08.772965 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-23 07:14:08.772973 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-23 07:14:08.772982 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-23 07:14:08.772990 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-23 07:14:08.772999 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-23 07:14:08.773007 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-23 07:14:08.773016 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-23 07:14:08.773024 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-23 07:14:08.773033 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-23 07:14:08.773041 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-23 07:14:08.773050 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-23 07:14:08.773058 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-23 07:14:08.773067 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-23 07:14:08.773075 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-23 07:14:08.773084 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-23 07:14:08.773092 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-23 07:14:08.773101 | orchestrator |
2025-09-23 07:14:08.773110 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-23 07:14:08.773119 | orchestrator | Tuesday 23 September 2025 07:14:06 +0000 (0:00:01.298) 0:00:09.832 *****
2025-09-23 07:14:08.773127 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:14:08.773136 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:14:08.773144 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:14:08.773153 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:14:08.773161 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:14:08.773170 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:14:08.773178 | orchestrator |
2025-09-23 07:14:08.773187 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-23 07:14:08.773195 | orchestrator | Tuesday 23 September 2025 07:14:06 +0000 (0:00:00.141) 0:00:09.973 *****
2025-09-23 07:14:08.773204 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:14:08.773212 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:14:08.773221 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:14:08.773233 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:14:08.773248 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:14:08.773262 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:14:08.773276 | orchestrator |
2025-09-23 07:14:08.773290 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-23 07:14:08.773305 | orchestrator | Tuesday 23 September 2025 07:14:07 +0000 (0:00:00.629) 0:00:10.603 *****
2025-09-23 07:14:08.773319 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:14:08.773334 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:14:08.773344 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:14:08.773353 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:14:08.773370 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:14:08.773378 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:14:08.773387 | orchestrator |
2025-09-23 07:14:08.773396 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-23 07:14:08.773404 | orchestrator | Tuesday 23 September 2025 07:14:07 +0000 (0:00:00.172) 0:00:10.776 *****
2025-09-23 07:14:08.773413 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-23 07:14:08.773421 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-23 07:14:08.773430 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:14:08.773461 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:14:08.773472 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-23 07:14:08.773480 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-23 07:14:08.773488 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:14:08.773497 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:14:08.773505 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-23 07:14:08.773514 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:14:08.773522 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-23 07:14:08.773530 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:14:08.773539 | orchestrator |
2025-09-23 07:14:08.773547 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-23 07:14:08.773556 | orchestrator | Tuesday 23 September 2025 07:14:08 +0000 (0:00:00.696) 0:00:11.472 *****
2025-09-23 07:14:08.773564 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:14:08.773573 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:14:08.773582 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:14:08.773590 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:14:08.773598 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:14:08.773607 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:14:08.773615 | orchestrator |
2025-09-23 07:14:08.773624 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-23 07:14:08.773632 | orchestrator | Tuesday 23 September 2025 07:14:08 +0000 (0:00:00.161) 0:00:11.634 *****
2025-09-23 07:14:08.773641 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:14:08.773649 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:14:08.773658 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:14:08.773666 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:14:08.773674 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:14:08.773683 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:14:08.773691 | orchestrator |
2025-09-23 07:14:08.773700 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-23 07:14:08.773708 | orchestrator | Tuesday 23 September 2025 07:14:08 +0000 (0:00:00.156) 0:00:11.791 *****
2025-09-23 07:14:08.773717 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:14:08.773726 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:14:08.773734 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:14:08.773742 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:14:08.773758 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:14:09.873968 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:14:09.874118 | orchestrator |
2025-09-23 07:14:09.874135 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-23 07:14:09.874148 | orchestrator | Tuesday 23 September 2025 07:14:08 +0000 (0:00:00.160) 0:00:11.951 *****
2025-09-23 07:14:09.874159 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:14:09.874170 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:14:09.874181 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:14:09.874192 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:14:09.874202 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:14:09.874213 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:14:09.874224 | orchestrator |
2025-09-23 07:14:09.874235 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-23 07:14:09.874246 | orchestrator | Tuesday 23 September 2025 07:14:09 +0000 (0:00:00.641) 0:00:12.593 *****
2025-09-23 07:14:09.874284 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:14:09.874296 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:14:09.874307 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:14:09.874317 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:14:09.874328 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:14:09.874339 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:14:09.874349 | orchestrator |
2025-09-23 07:14:09.874360 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:14:09.874372 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:14:09.874385 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:14:09.874396 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:14:09.874407 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:14:09.874485 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:14:09.874498 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:14:09.874509 | orchestrator |
2025-09-23 07:14:09.874522 | orchestrator |
2025-09-23 07:14:09.874543 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:14:09.874556 | orchestrator | Tuesday 23 September 2025 07:14:09 +0000 (0:00:00.229) 0:00:12.823 *****
2025-09-23 07:14:09.874569 | orchestrator | ===============================================================================
2025-09-23 07:14:09.874582 | orchestrator | Gathering Facts --------------------------------------------------------- 3.33s
2025-09-23 07:14:09.874594 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s
2025-09-23 07:14:09.874608 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s
2025-09-23 07:14:09.874620 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.24s
2025-09-23 07:14:09.874633 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s
2025-09-23 07:14:09.874645 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2025-09-23 07:14:09.874657 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2025-09-23 07:14:09.874669 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2025-09-23 07:14:09.874682 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s
2025-09-23 07:14:09.874694 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s
2025-09-23 07:14:09.874706 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2025-09-23 07:14:09.874719 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2025-09-23 07:14:09.874732 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-09-23 07:14:09.874745 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-09-23 07:14:09.874757 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-09-23 07:14:09.874770 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-09-23 07:14:09.874783 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-09-23 07:14:09.874796 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2025-09-23 07:14:10.149067 | orchestrator | + osism apply --environment custom facts
2025-09-23 07:14:12.007600 | orchestrator | 2025-09-23 07:14:12 | INFO  | Trying to run play facts in environment custom
2025-09-23 07:14:22.097626 | orchestrator | 2025-09-23 07:14:22 | INFO  | Task 93698c2a-214c-4876-b433-d230ebd4f9cf (facts) was prepared for execution.
2025-09-23 07:14:22.097733 | orchestrator | 2025-09-23 07:14:22 | INFO  | It takes a moment until task 93698c2a-214c-4876-b433-d230ebd4f9cf (facts) has been started and output is visible here.
2025-09-23 07:15:09.985682 | orchestrator |
2025-09-23 07:15:09.985774 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-23 07:15:09.985789 | orchestrator |
2025-09-23 07:15:09.985799 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-23 07:15:09.985809 | orchestrator | Tuesday 23 September 2025 07:14:25 +0000 (0:00:00.085) 0:00:00.085 *****
2025-09-23 07:15:09.985819 | orchestrator | ok: [testbed-manager]
2025-09-23 07:15:09.985830 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:15:09.985840 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:15:09.985850 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:15:09.985860 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:15:09.985869 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:15:09.985879 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:15:09.985888 | orchestrator |
2025-09-23 07:15:09.985898 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-23 07:15:09.985908 | orchestrator | Tuesday 23 September 2025 07:14:27 +0000 (0:00:01.511) 0:00:01.597 *****
2025-09-23 07:15:09.985918 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:15:09.985927 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:15:09.985937 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:15:09.985947 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:15:09.985956 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:15:09.985966 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:15:09.985975 | orchestrator | ok: [testbed-manager]
2025-09-23 07:15:09.985985 | orchestrator |
2025-09-23 07:15:09.985995 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-23 07:15:09.986004 | orchestrator |
2025-09-23 07:15:09.986014 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-23 07:15:09.986074 | orchestrator | Tuesday 23 September 2025 07:14:29 +0000 (0:00:01.880) 0:00:03.477 *****
2025-09-23 07:15:09.986083 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:15:09.986093 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:15:09.986103 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:15:09.986112 | orchestrator |
2025-09-23 07:15:09.986122 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-23 07:15:09.986133 | orchestrator | Tuesday 23 September 2025 07:14:29 +0000 (0:00:00.114) 0:00:03.592 *****
2025-09-23 07:15:09.986143 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:15:09.986152 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:15:09.986162 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:15:09.986171 | orchestrator |
2025-09-23 07:15:09.986181 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-23 07:15:09.986191 | orchestrator | Tuesday 23 September 2025 07:14:29 +0000 (0:00:00.220) 0:00:03.813 *****
2025-09-23 07:15:09.986200 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:15:09.986211 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:15:09.986220 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:15:09.986230 | orchestrator |
2025-09-23 07:15:09.986240 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-23 07:15:09.986264 | orchestrator | Tuesday 23 September 2025 07:14:29 +0000 (0:00:00.214) 0:00:04.027 *****
2025-09-23 07:15:09.986277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:15:09.986289 | orchestrator |
2025-09-23 07:15:09.986301 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-23 07:15:09.986331 | orchestrator | Tuesday 23 September 2025 07:14:30 +0000 (0:00:00.147) 0:00:04.175 *****
2025-09-23 07:15:09.986343 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:15:09.986353 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:15:09.986364 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:15:09.986375 | orchestrator |
2025-09-23 07:15:09.986430 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-23 07:15:09.986444 | orchestrator | Tuesday 23 September 2025 07:14:30 +0000 (0:00:00.433) 0:00:04.608 *****
2025-09-23 07:15:09.986455 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:15:09.986466 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:15:09.986476 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:15:09.986485 | orchestrator |
2025-09-23 07:15:09.986494 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-23 07:15:09.986504 | orchestrator | Tuesday 23 September 2025 07:14:30 +0000 (0:00:00.124) 0:00:04.733 *****
2025-09-23 07:15:09.986513 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:15:09.986523 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:15:09.986532 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:15:09.986542 | orchestrator |
2025-09-23 07:15:09.986551 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-23 07:15:09.986561 | orchestrator | Tuesday 23 September 2025 07:14:31 +0000 (0:00:01.094) 0:00:05.827 *****
2025-09-23 07:15:09.986570 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:15:09.986580 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:15:09.986589 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:15:09.986599 | orchestrator |
2025-09-23 07:15:09.986608 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-23 07:15:09.986617 | orchestrator | Tuesday 23 September 2025 07:14:32 +0000 (0:00:00.460) 0:00:06.288 *****
2025-09-23 07:15:09.986627 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:15:09.986636 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:15:09.986646 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:15:09.986655 | orchestrator |
2025-09-23 07:15:09.986665 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-23 07:15:09.986674 | orchestrator | Tuesday 23 September 2025 07:14:33 +0000 (0:00:01.047) 0:00:07.335 *****
2025-09-23 07:15:09.986684 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:15:09.986693 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:15:09.986702 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:15:09.986712 | orchestrator |
2025-09-23 07:15:09.986721 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-23 07:15:09.986731 | orchestrator | Tuesday 23 September 2025 07:14:50 +0000 (0:00:17.778) 0:00:25.113 *****
2025-09-23 07:15:09.986740 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:15:09.986749 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:15:09.986759 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:15:09.986768 | orchestrator |
2025-09-23 07:15:09.986778 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-23 07:15:09.986806 | orchestrator | Tuesday 23 September 2025 07:14:51 +0000 (0:00:00.117) 0:00:25.230 *****
2025-09-23 07:15:09.986823 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:15:09.986839 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:15:09.986854 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:15:09.986869 | orchestrator |
2025-09-23 07:15:09.986885 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-23 07:15:09.986901 | orchestrator | Tuesday 23 September 2025 07:14:59 +0000 (0:00:08.532) 0:00:33.763 *****
2025-09-23 07:15:09.986917 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:15:09.986935 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:15:09.986952 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:15:09.986968 | orchestrator |
2025-09-23 07:15:09.986979 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-23 07:15:09.986989 | orchestrator | Tuesday 23 September 2025 07:15:00 +0000 (0:00:00.458) 0:00:34.222 *****
2025-09-23 07:15:09.987007 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-23 07:15:09.987017 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-23 07:15:09.987027 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-23 07:15:09.987036 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-23 07:15:09.987045 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-23 07:15:09.987055 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-23 07:15:09.987064 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-23 07:15:09.987073 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-23 07:15:09.987083 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-23 07:15:09.987092 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-23 07:15:09.987102 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-23 07:15:09.987111 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-23 07:15:09.987121 | orchestrator |
2025-09-23 07:15:09.987131 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-23 07:15:09.987148 | orchestrator | Tuesday 23 September 2025 07:15:03 +0000 (0:00:03.767) 0:00:37.989 *****
2025-09-23 07:15:09.987163 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:15:09.987179 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:15:09.987195 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:15:09.987213 | orchestrator |
2025-09-23 07:15:09.987230 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-23 07:15:09.987247 | orchestrator |
2025-09-23 07:15:09.987258 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-23 07:15:09.987267 | orchestrator | Tuesday 23 September 2025 07:15:05 +0000 (0:00:01.214) 0:00:39.204 *****
2025-09-23 07:15:09.987277 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:15:09.987286 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:15:09.987296 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:15:09.987305 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:15:09.987315 | orchestrator | ok: [testbed-manager]
2025-09-23 07:15:09.987324 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:15:09.987333 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:15:09.987342 | orchestrator |
2025-09-23 07:15:09.987352 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:15:09.987362 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:15:09.987372 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:15:09.987383 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:15:09.987457 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:15:09.987470 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:15:09.987479 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:15:09.987488 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:15:09.987498 | orchestrator |
2025-09-23 07:15:09.987507 | orchestrator |
2025-09-23 07:15:09.987517 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:15:09.987534 | orchestrator | Tuesday 23 September 2025 07:15:09 +0000 (0:00:04.880) 0:00:44.085 *****
2025-09-23 07:15:09.987544 | orchestrator | ===============================================================================
2025-09-23 07:15:09.987553 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.78s
2025-09-23 07:15:09.987562 | orchestrator | Install required packages (Debian) -------------------------------------- 8.53s
2025-09-23 07:15:09.987572 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.88s
2025-09-23 07:15:09.987581 | orchestrator | Copy fact files --------------------------------------------------------- 3.77s
2025-09-23 07:15:09.987590 | orchestrator | Copy fact file ---------------------------------------------------------- 1.88s
2025-09-23 07:15:09.987600 | orchestrator | Create custom facts directory ------------------------------------------- 1.51s
2025-09-23 07:15:09.987618 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.21s
2025-09-23 07:15:10.117003 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s
2025-09-23 07:15:10.117060 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2025-09-23 07:15:10.117069 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-09-23 07:15:10.117076 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2025-09-23 07:15:10.117084 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-09-23 07:15:10.117091 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2025-09-23 07:15:10.117099 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2025-09-23 07:15:10.117106 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-09-23 07:15:10.117115 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-09-23 07:15:10.117122 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2025-09-23 07:15:10.117129 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-09-23 07:15:10.309916 | orchestrator | + osism apply bootstrap
2025-09-23 07:15:22.210090 | orchestrator | 2025-09-23 07:15:22 | INFO  | Task f353e0cd-5e54-46df-8c87-de3160b51703 (bootstrap) was prepared for execution.
2025-09-23 07:15:22.210232 | orchestrator | 2025-09-23 07:15:22 | INFO  | It takes a moment until task f353e0cd-5e54-46df-8c87-de3160b51703 (bootstrap) has been started and output is visible here.
2025-09-23 07:15:38.046492 | orchestrator | 2025-09-23 07:15:38.046614 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-23 07:15:38.046631 | orchestrator | 2025-09-23 07:15:38.046643 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-23 07:15:38.046655 | orchestrator | Tuesday 23 September 2025 07:15:26 +0000 (0:00:00.163) 0:00:00.163 ***** 2025-09-23 07:15:38.046666 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:38.046679 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:38.046690 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:38.046702 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:38.046718 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:38.046737 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:38.046753 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:38.046772 | orchestrator | 2025-09-23 07:15:38.046790 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-23 07:15:38.046808 | orchestrator | 2025-09-23 07:15:38.046846 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-23 07:15:38.046867 | orchestrator | Tuesday 23 September 2025 07:15:26 +0000 (0:00:00.237) 0:00:00.400 ***** 2025-09-23 07:15:38.046885 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:38.046903 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:38.046922 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:38.046941 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:38.046986 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:38.047000 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:38.047013 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:38.047025 | orchestrator | 2025-09-23 07:15:38.047038 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-09-23 07:15:38.047054 | orchestrator | 2025-09-23 07:15:38.047078 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-23 07:15:38.047105 | orchestrator | Tuesday 23 September 2025 07:15:30 +0000 (0:00:03.749) 0:00:04.149 ***** 2025-09-23 07:15:38.047123 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-23 07:15:38.047144 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-23 07:15:38.047164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-23 07:15:38.047183 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-23 07:15:38.047196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-23 07:15:38.047209 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-23 07:15:38.047222 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-23 07:15:38.047234 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-23 07:15:38.047247 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-23 07:15:38.047259 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-23 07:15:38.047272 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-23 07:15:38.047286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-23 07:15:38.047298 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-23 07:15:38.047311 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-23 07:15:38.047323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-23 07:15:38.047337 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-23 07:15:38.047348 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-23 07:15:38.047360 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-5)  2025-09-23 07:15:38.047406 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-23 07:15:38.047426 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-23 07:15:38.047445 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-23 07:15:38.047463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-23 07:15:38.047475 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:15:38.047486 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-23 07:15:38.047496 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:15:38.047507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-23 07:15:38.047518 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-23 07:15:38.047528 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-23 07:15:38.047539 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-23 07:15:38.047550 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-23 07:15:38.047560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-23 07:15:38.047571 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-23 07:15:38.047586 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:15:38.047607 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-23 07:15:38.047633 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-23 07:15:38.047650 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-23 07:15:38.047668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-23 07:15:38.047686 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:15:38.047702 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-23 
07:15:38.047733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:15:38.047749 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-23 07:15:38.047766 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-23 07:15:38.047783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:15:38.047800 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-23 07:15:38.047817 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-23 07:15:38.047834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:15:38.047852 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:15:38.047898 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-23 07:15:38.047916 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-23 07:15:38.047933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-23 07:15:38.047950 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-23 07:15:38.047966 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:15:38.047983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-23 07:15:38.047999 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-23 07:15:38.048016 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-23 07:15:38.048033 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:15:38.048052 | orchestrator | 2025-09-23 07:15:38.048070 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-23 07:15:38.048088 | orchestrator | 2025-09-23 07:15:38.048107 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-23 07:15:38.048124 | orchestrator | Tuesday 23 September 2025 07:15:30 +0000 (0:00:00.503) 
0:00:04.653 ***** 2025-09-23 07:15:38.048142 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:38.048160 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:38.048177 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:38.048196 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:38.048214 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:38.048231 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:38.048249 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:38.048267 | orchestrator | 2025-09-23 07:15:38.048286 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-23 07:15:38.048304 | orchestrator | Tuesday 23 September 2025 07:15:32 +0000 (0:00:01.193) 0:00:05.847 ***** 2025-09-23 07:15:38.048323 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:38.048341 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:38.048359 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:38.048438 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:38.048463 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:38.048481 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:38.048499 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:38.048517 | orchestrator | 2025-09-23 07:15:38.048534 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-23 07:15:38.048553 | orchestrator | Tuesday 23 September 2025 07:15:33 +0000 (0:00:01.263) 0:00:07.110 ***** 2025-09-23 07:15:38.048572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:15:38.048592 | orchestrator | 2025-09-23 07:15:38.048612 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-23 07:15:38.048630 | orchestrator | Tuesday 23 
September 2025 07:15:33 +0000 (0:00:00.251) 0:00:07.361 ***** 2025-09-23 07:15:38.048648 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:15:38.048666 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:15:38.048677 | orchestrator | changed: [testbed-manager] 2025-09-23 07:15:38.048688 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:15:38.048698 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:15:38.048724 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:15:38.048735 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:15:38.048745 | orchestrator | 2025-09-23 07:15:38.048756 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-23 07:15:38.048767 | orchestrator | Tuesday 23 September 2025 07:15:35 +0000 (0:00:02.100) 0:00:09.462 ***** 2025-09-23 07:15:38.048778 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:15:38.048791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:15:38.048804 | orchestrator | 2025-09-23 07:15:38.048815 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-23 07:15:38.048826 | orchestrator | Tuesday 23 September 2025 07:15:35 +0000 (0:00:00.281) 0:00:09.744 ***** 2025-09-23 07:15:38.048837 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:15:38.048847 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:15:38.048858 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:15:38.048868 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:15:38.048879 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:15:38.048889 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:15:38.048900 | orchestrator | 2025-09-23 07:15:38.048911 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2025-09-23 07:15:38.048921 | orchestrator | Tuesday 23 September 2025 07:15:36 +0000 (0:00:00.985) 0:00:10.729 ***** 2025-09-23 07:15:38.048932 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:15:38.048943 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:15:38.048954 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:15:38.048964 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:15:38.048974 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:15:38.048985 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:15:38.048995 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:15:38.049006 | orchestrator | 2025-09-23 07:15:38.049017 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-23 07:15:38.049027 | orchestrator | Tuesday 23 September 2025 07:15:37 +0000 (0:00:00.551) 0:00:11.281 ***** 2025-09-23 07:15:38.049038 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:15:38.049049 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:15:38.049059 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:15:38.049085 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:15:38.049104 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:15:38.049118 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:15:38.049133 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:38.049161 | orchestrator | 2025-09-23 07:15:38.049181 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-23 07:15:38.049201 | orchestrator | Tuesday 23 September 2025 07:15:37 +0000 (0:00:00.437) 0:00:11.718 ***** 2025-09-23 07:15:38.049221 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:15:38.049238 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:15:38.049274 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:15:50.179960 | orchestrator | skipping: 
[testbed-node-2] 2025-09-23 07:15:50.180083 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:15:50.180095 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:15:50.180103 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:15:50.180112 | orchestrator | 2025-09-23 07:15:50.180121 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-23 07:15:50.180132 | orchestrator | Tuesday 23 September 2025 07:15:38 +0000 (0:00:00.212) 0:00:11.931 ***** 2025-09-23 07:15:50.180142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:15:50.180168 | orchestrator | 2025-09-23 07:15:50.180211 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-23 07:15:50.180221 | orchestrator | Tuesday 23 September 2025 07:15:38 +0000 (0:00:00.309) 0:00:12.241 ***** 2025-09-23 07:15:50.180230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:15:50.180238 | orchestrator | 2025-09-23 07:15:50.180246 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-23 07:15:50.180253 | orchestrator | Tuesday 23 September 2025 07:15:38 +0000 (0:00:00.316) 0:00:12.557 ***** 2025-09-23 07:15:50.180262 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:50.180271 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.180279 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:50.180287 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.180294 | orchestrator | ok: [testbed-node-5] 2025-09-23 
07:15:50.180302 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.180309 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:50.180317 | orchestrator | 2025-09-23 07:15:50.180325 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-23 07:15:50.180333 | orchestrator | Tuesday 23 September 2025 07:15:40 +0000 (0:00:01.486) 0:00:14.043 ***** 2025-09-23 07:15:50.180340 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:15:50.180348 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:15:50.180356 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:15:50.180364 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:15:50.180434 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:15:50.180446 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:15:50.180455 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:15:50.180464 | orchestrator | 2025-09-23 07:15:50.180473 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-23 07:15:50.180482 | orchestrator | Tuesday 23 September 2025 07:15:40 +0000 (0:00:00.224) 0:00:14.268 ***** 2025-09-23 07:15:50.180491 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.180500 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:50.180509 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:50.180518 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:50.180526 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.180535 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.180544 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:50.180553 | orchestrator | 2025-09-23 07:15:50.180562 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-23 07:15:50.180571 | orchestrator | Tuesday 23 September 2025 07:15:41 +0000 (0:00:00.627) 0:00:14.895 ***** 2025-09-23 07:15:50.180580 | orchestrator | skipping: 
[testbed-manager] 2025-09-23 07:15:50.180589 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:15:50.180599 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:15:50.180608 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:15:50.180617 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:15:50.180626 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:15:50.180634 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:15:50.180643 | orchestrator | 2025-09-23 07:15:50.180652 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-23 07:15:50.180663 | orchestrator | Tuesday 23 September 2025 07:15:41 +0000 (0:00:00.230) 0:00:15.125 ***** 2025-09-23 07:15:50.180672 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.180681 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:15:50.180690 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:15:50.180699 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:15:50.180707 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:15:50.180716 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:15:50.180725 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:15:50.180734 | orchestrator | 2025-09-23 07:15:50.180743 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-23 07:15:50.180760 | orchestrator | Tuesday 23 September 2025 07:15:41 +0000 (0:00:00.545) 0:00:15.671 ***** 2025-09-23 07:15:50.180769 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.180778 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:15:50.180787 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:15:50.180797 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:15:50.180805 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:15:50.180812 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:15:50.180820 | orchestrator | changed: 
[testbed-node-5] 2025-09-23 07:15:50.180828 | orchestrator | 2025-09-23 07:15:50.180836 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-23 07:15:50.180844 | orchestrator | Tuesday 23 September 2025 07:15:43 +0000 (0:00:01.176) 0:00:16.848 ***** 2025-09-23 07:15:50.180851 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.180859 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:50.180867 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:50.180875 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:50.180882 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.180890 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.180898 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:50.180905 | orchestrator | 2025-09-23 07:15:50.180913 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-23 07:15:50.180921 | orchestrator | Tuesday 23 September 2025 07:15:44 +0000 (0:00:01.162) 0:00:18.010 ***** 2025-09-23 07:15:50.180947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:15:50.180956 | orchestrator | 2025-09-23 07:15:50.180963 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-23 07:15:50.180971 | orchestrator | Tuesday 23 September 2025 07:15:44 +0000 (0:00:00.407) 0:00:18.418 ***** 2025-09-23 07:15:50.180979 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:15:50.180987 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:15:50.180995 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:15:50.181002 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:15:50.181010 | orchestrator | changed: [testbed-node-2] 2025-09-23 
07:15:50.181018 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:15:50.181030 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:15:50.181038 | orchestrator | 2025-09-23 07:15:50.181046 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-23 07:15:50.181054 | orchestrator | Tuesday 23 September 2025 07:15:45 +0000 (0:00:01.172) 0:00:19.591 ***** 2025-09-23 07:15:50.181062 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.181070 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:50.181078 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:50.181085 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:50.181093 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.181101 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.181108 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:50.181116 | orchestrator | 2025-09-23 07:15:50.181124 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-23 07:15:50.181131 | orchestrator | Tuesday 23 September 2025 07:15:46 +0000 (0:00:00.247) 0:00:19.838 ***** 2025-09-23 07:15:50.181139 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.181147 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:50.181154 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:50.181162 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:50.181170 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.181177 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.181185 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:50.181193 | orchestrator | 2025-09-23 07:15:50.181200 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-23 07:15:50.181208 | orchestrator | Tuesday 23 September 2025 07:15:46 +0000 (0:00:00.248) 0:00:20.087 ***** 2025-09-23 07:15:50.181221 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.181229 | 
orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:50.181237 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:50.181244 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:50.181252 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.181259 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.181267 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:50.181274 | orchestrator | 2025-09-23 07:15:50.181282 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-23 07:15:50.181290 | orchestrator | Tuesday 23 September 2025 07:15:46 +0000 (0:00:00.229) 0:00:20.316 ***** 2025-09-23 07:15:50.181299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:15:50.181309 | orchestrator | 2025-09-23 07:15:50.181317 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-23 07:15:50.181325 | orchestrator | Tuesday 23 September 2025 07:15:46 +0000 (0:00:00.289) 0:00:20.606 ***** 2025-09-23 07:15:50.181333 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.181341 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:50.181348 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:50.181356 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:50.181364 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.181384 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.181392 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:50.181400 | orchestrator | 2025-09-23 07:15:50.181408 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-23 07:15:50.181415 | orchestrator | Tuesday 23 September 2025 07:15:47 +0000 (0:00:00.485) 0:00:21.091 ***** 2025-09-23 07:15:50.181423 | orchestrator | 
skipping: [testbed-manager] 2025-09-23 07:15:50.181431 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:15:50.181439 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:15:50.181446 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:15:50.181454 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:15:50.181462 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:15:50.181470 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:15:50.181477 | orchestrator | 2025-09-23 07:15:50.181485 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-23 07:15:50.181493 | orchestrator | Tuesday 23 September 2025 07:15:47 +0000 (0:00:00.259) 0:00:21.350 ***** 2025-09-23 07:15:50.181500 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:15:50.181508 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.181516 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:15:50.181523 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.181531 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:15:50.181539 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.181547 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:50.181554 | orchestrator | 2025-09-23 07:15:50.181562 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-23 07:15:50.181570 | orchestrator | Tuesday 23 September 2025 07:15:48 +0000 (0:00:00.968) 0:00:22.319 ***** 2025-09-23 07:15:50.181578 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.181585 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:15:50.181593 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:15:50.181601 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:15:50.181608 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.181616 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.181624 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:15:50.181631 | orchestrator | 
2025-09-23 07:15:50.181639 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-23 07:15:50.181647 | orchestrator | Tuesday 23 September 2025 07:15:49 +0000 (0:00:00.523) 0:00:22.843 ***** 2025-09-23 07:15:50.181655 | orchestrator | ok: [testbed-manager] 2025-09-23 07:15:50.181669 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:15:50.181677 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:15:50.181684 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:15:50.181697 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:16:31.810830 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:16:31.810941 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:16:31.810954 | orchestrator | 2025-09-23 07:16:31.810963 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-23 07:16:31.810974 | orchestrator | Tuesday 23 September 2025 07:15:50 +0000 (0:00:01.123) 0:00:23.967 ***** 2025-09-23 07:16:31.810982 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:16:31.810991 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:16:31.810999 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:16:31.811007 | orchestrator | changed: [testbed-manager] 2025-09-23 07:16:31.811014 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:16:31.811022 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:16:31.811030 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:16:31.811052 | orchestrator | 2025-09-23 07:16:31.811061 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-23 07:16:31.811069 | orchestrator | Tuesday 23 September 2025 07:16:08 +0000 (0:00:17.871) 0:00:41.838 ***** 2025-09-23 07:16:31.811077 | orchestrator | ok: [testbed-manager] 2025-09-23 07:16:31.811086 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:16:31.811093 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:16:31.811101 | orchestrator 
| ok: [testbed-node-2]
2025-09-23 07:16:31.811109 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.811117 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.811125 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.811133 | orchestrator |
2025-09-23 07:16:31.811141 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-09-23 07:16:31.811149 | orchestrator | Tuesday 23 September 2025 07:16:08 +0000 (0:00:00.271) 0:00:42.109 *****
2025-09-23 07:16:31.811157 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.811165 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.811173 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.811181 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.811188 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.811196 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.811204 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.811212 | orchestrator |
2025-09-23 07:16:31.811220 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-09-23 07:16:31.811228 | orchestrator | Tuesday 23 September 2025 07:16:08 +0000 (0:00:00.213) 0:00:42.323 *****
2025-09-23 07:16:31.811236 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.811244 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.811252 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.811260 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.811268 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.811276 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.811284 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.811292 | orchestrator |
2025-09-23 07:16:31.811300 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-09-23 07:16:31.811308 | orchestrator | Tuesday 23 September 2025 07:16:08 +0000 (0:00:00.243) 0:00:42.566 *****
2025-09-23 07:16:31.811317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:16:31.811327 | orchestrator |
2025-09-23 07:16:31.811335 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-09-23 07:16:31.811343 | orchestrator | Tuesday 23 September 2025 07:16:09 +0000 (0:00:00.288) 0:00:42.855 *****
2025-09-23 07:16:31.811351 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.811389 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.811400 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.811432 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.811441 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.811450 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.811459 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.811468 | orchestrator |
2025-09-23 07:16:31.811477 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-09-23 07:16:31.811486 | orchestrator | Tuesday 23 September 2025 07:16:10 +0000 (0:00:01.666) 0:00:44.521 *****
2025-09-23 07:16:31.811497 | orchestrator | changed: [testbed-manager]
2025-09-23 07:16:31.811511 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:16:31.811526 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:16:31.811542 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:16:31.811552 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:16:31.811561 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:16:31.811570 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:16:31.811579 | orchestrator |
2025-09-23 07:16:31.811588 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-09-23 07:16:31.811612 | orchestrator | Tuesday 23 September 2025 07:16:11 +0000 (0:00:01.168) 0:00:45.690 *****
2025-09-23 07:16:31.811622 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.811631 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.811641 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.811649 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.811658 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.811667 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.811675 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.811685 | orchestrator |
2025-09-23 07:16:31.811694 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-09-23 07:16:31.811702 | orchestrator | Tuesday 23 September 2025 07:16:12 +0000 (0:00:00.301) 0:00:46.491 *****
2025-09-23 07:16:31.811712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:16:31.811723 | orchestrator |
2025-09-23 07:16:31.811732 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-09-23 07:16:31.811740 | orchestrator | Tuesday 23 September 2025 07:16:12 +0000 (0:00:00.301) 0:00:46.792 *****
2025-09-23 07:16:31.811748 | orchestrator | changed: [testbed-manager]
2025-09-23 07:16:31.811756 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:16:31.811764 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:16:31.811772 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:16:31.811780 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:16:31.811788 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:16:31.811795 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:16:31.811803 | orchestrator |
2025-09-23 07:16:31.811826 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-09-23 07:16:31.811835 | orchestrator | Tuesday 23 September 2025 07:16:14 +0000 (0:00:01.040) 0:00:47.832 *****
2025-09-23 07:16:31.811843 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:16:31.811850 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:16:31.811858 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:16:31.811866 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:16:31.811874 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:16:31.811881 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:16:31.811889 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:16:31.811897 | orchestrator |
2025-09-23 07:16:31.811905 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-09-23 07:16:31.811917 | orchestrator | Tuesday 23 September 2025 07:16:14 +0000 (0:00:00.300) 0:00:48.133 *****
2025-09-23 07:16:31.811925 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:16:31.811932 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:16:31.811940 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:16:31.811948 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:16:31.811963 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:16:31.811971 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:16:31.811978 | orchestrator | changed: [testbed-manager]
2025-09-23 07:16:31.811986 | orchestrator |
2025-09-23 07:16:31.811994 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-09-23 07:16:31.812002 | orchestrator | Tuesday 23 September 2025 07:16:26 +0000 (0:00:12.337) 0:01:00.470 *****
2025-09-23 07:16:31.812010 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.812018 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.812026 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.812034 | orchestrator | ok: [testbed-node-4]
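As an aside, the "Forward syslog message to local fluentd daemon" task in the rsyslog section above typically drops a forwarding rule into the rsyslog configuration. A minimal sketch of what such a rule looks like is below; the file name, target port, and protocol are assumptions for illustration, not values taken from this job:

```
# /etc/rsyslog.d/10-fluentd.conf -- hypothetical sketch, not the file the role actually deploys
# Forward all syslog messages to a fluentd daemon listening on localhost.
*.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
```

rsyslog's `omfwd` output module handles the forwarding; fluentd would receive these messages via a matching syslog input on the same port.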
2025-09-23 07:16:31.812042 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.812050 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.812057 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.812065 | orchestrator |
2025-09-23 07:16:31.812073 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-23 07:16:31.812081 | orchestrator | Tuesday 23 September 2025 07:16:27 +0000 (0:00:01.010) 0:01:01.481 *****
2025-09-23 07:16:31.812089 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.812097 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.812105 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.812112 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.812120 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.812128 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.812135 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.812143 | orchestrator |
2025-09-23 07:16:31.812151 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-23 07:16:31.812159 | orchestrator | Tuesday 23 September 2025 07:16:28 +0000 (0:00:00.914) 0:01:02.396 *****
2025-09-23 07:16:31.812167 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.812175 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.812182 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.812190 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.812198 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.812206 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.812213 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.812221 | orchestrator |
2025-09-23 07:16:31.812229 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-23 07:16:31.812237 | orchestrator | Tuesday 23 September 2025 07:16:28 +0000 (0:00:00.217) 0:01:02.614 *****
2025-09-23 07:16:31.812245 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.812253 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.812261 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.812268 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.812276 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.812284 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.812292 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.812299 | orchestrator |
2025-09-23 07:16:31.812307 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-23 07:16:31.812315 | orchestrator | Tuesday 23 September 2025 07:16:29 +0000 (0:00:00.219) 0:01:02.833 *****
2025-09-23 07:16:31.812323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:16:31.812331 | orchestrator |
2025-09-23 07:16:31.812339 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-23 07:16:31.812347 | orchestrator | Tuesday 23 September 2025 07:16:29 +0000 (0:00:00.275) 0:01:03.109 *****
2025-09-23 07:16:31.812355 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.812380 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.812388 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.812396 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.812403 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.812411 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.812419 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.812432 | orchestrator |
2025-09-23 07:16:31.812440 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-23 07:16:31.812448 | orchestrator | Tuesday 23 September 2025 07:16:30 +0000 (0:00:01.688) 0:01:04.797 *****
2025-09-23 07:16:31.812456 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:16:31.812464 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:16:31.812472 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:16:31.812479 | orchestrator | changed: [testbed-manager]
2025-09-23 07:16:31.812487 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:16:31.812495 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:16:31.812503 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:16:31.812510 | orchestrator |
2025-09-23 07:16:31.812518 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-23 07:16:31.812526 | orchestrator | Tuesday 23 September 2025 07:16:31 +0000 (0:00:00.546) 0:01:05.344 *****
2025-09-23 07:16:31.812534 | orchestrator | ok: [testbed-manager]
2025-09-23 07:16:31.812542 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:16:31.812549 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:16:31.812557 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:16:31.812565 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:16:31.812573 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:16:31.812580 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:16:31.812588 | orchestrator |
2025-09-23 07:16:31.812601 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-23 07:18:56.236516 | orchestrator | Tuesday 23 September 2025 07:16:31 +0000 (0:00:00.256) 0:01:05.600 *****
2025-09-23 07:18:56.236629 | orchestrator | ok: [testbed-manager]
2025-09-23 07:18:56.236646 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:18:56.236657 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:18:56.236668 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:18:56.236679 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:18:56.236690 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:18:56.236701 | orchestrator | ok: [testbed-node-4]
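For context on the "Set needrestart mode" task above: on Ubuntu/Debian, needrestart defaults to prompting interactively after package operations, which would hang an unattended run, so deployment roles commonly switch it to automatic restarts. A hedged sketch of the relevant setting (the exact file and value written by the role are not shown in this log; needrestart configuration uses Perl syntax):

```
# /etc/needrestart/conf.d/osism.conf -- hypothetical sketch
# 'a' = restart affected services automatically, 'l' = list only, 'i' = interactive (default)
$nrconf{restart} = 'a';
```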
[testbed-node-4] 2025-09-23 07:18:56.236712 | orchestrator | 2025-09-23 07:18:56.236724 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-23 07:18:56.236735 | orchestrator | Tuesday 23 September 2025 07:16:33 +0000 (0:00:01.207) 0:01:06.807 ***** 2025-09-23 07:18:56.236746 | orchestrator | changed: [testbed-manager] 2025-09-23 07:18:56.236758 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:18:56.236786 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:18:56.236797 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:18:56.236808 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:18:56.236819 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:18:56.236829 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:18:56.236840 | orchestrator | 2025-09-23 07:18:56.236853 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-23 07:18:56.236863 | orchestrator | Tuesday 23 September 2025 07:16:34 +0000 (0:00:01.746) 0:01:08.554 ***** 2025-09-23 07:18:56.236874 | orchestrator | ok: [testbed-manager] 2025-09-23 07:18:56.236886 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:18:56.236896 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:18:56.236907 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:18:56.236918 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:18:56.236928 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:18:56.236939 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:18:56.236950 | orchestrator | 2025-09-23 07:18:56.236961 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-23 07:18:56.236971 | orchestrator | Tuesday 23 September 2025 07:16:37 +0000 (0:00:02.466) 0:01:11.020 ***** 2025-09-23 07:18:56.236982 | orchestrator | ok: [testbed-manager] 2025-09-23 07:18:56.236993 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:18:56.237003 | orchestrator | 
ok: [testbed-node-5] 2025-09-23 07:18:56.237014 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:18:56.237025 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:18:56.237041 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:18:56.237088 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:18:56.237106 | orchestrator | 2025-09-23 07:18:56.237124 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-23 07:18:56.237145 | orchestrator | Tuesday 23 September 2025 07:17:15 +0000 (0:00:38.488) 0:01:49.509 ***** 2025-09-23 07:18:56.237164 | orchestrator | changed: [testbed-manager] 2025-09-23 07:18:56.237184 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:18:56.237202 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:18:56.237215 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:18:56.237227 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:18:56.237239 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:18:56.237250 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:18:56.237262 | orchestrator | 2025-09-23 07:18:56.237274 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-23 07:18:56.237287 | orchestrator | Tuesday 23 September 2025 07:18:35 +0000 (0:01:19.627) 0:03:09.137 ***** 2025-09-23 07:18:56.237299 | orchestrator | ok: [testbed-manager] 2025-09-23 07:18:56.237311 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:18:56.237350 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:18:56.237362 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:18:56.237375 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:18:56.237387 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:18:56.237399 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:18:56.237411 | orchestrator | 2025-09-23 07:18:56.237423 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-23 07:18:56.237435 
| orchestrator | Tuesday 23 September 2025 07:18:37 +0000 (0:00:02.123) 0:03:11.260 ***** 2025-09-23 07:18:56.237446 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:18:56.237456 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:18:56.237467 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:18:56.237477 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:18:56.237487 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:18:56.237498 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:18:56.237508 | orchestrator | changed: [testbed-manager] 2025-09-23 07:18:56.237519 | orchestrator | 2025-09-23 07:18:56.237530 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-23 07:18:56.237541 | orchestrator | Tuesday 23 September 2025 07:18:50 +0000 (0:00:13.034) 0:03:24.295 ***** 2025-09-23 07:18:56.237566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-23 07:18:56.237594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-09-23 07:18:56.237648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-23 07:18:56.237681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-23 07:18:56.237717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-23 07:18:56.237730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-23 07:18:56.237741 | orchestrator | 2025-09-23 07:18:56.237753 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-23 07:18:56.237763 | orchestrator | Tuesday 23 September 2025 07:18:50 +0000 (0:00:00.426) 0:03:24.721 ***** 2025-09-23 07:18:56.237774 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-23 07:18:56.237785 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:18:56.237797 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-23 07:18:56.237807 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-23 07:18:56.237818 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:18:56.237829 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:18:56.237840 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-23 07:18:56.237851 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:18:56.237862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-23 07:18:56.237873 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-23 07:18:56.237884 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-23 07:18:56.237894 | orchestrator | 2025-09-23 07:18:56.237905 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-23 07:18:56.237916 | orchestrator | Tuesday 23 September 2025 07:18:51 +0000 (0:00:00.694) 0:03:25.416 ***** 2025-09-23 07:18:56.237926 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-23 07:18:56.237938 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-23 07:18:56.237949 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-23 07:18:56.237960 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-23 07:18:56.237970 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-23 07:18:56.237981 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-23 07:18:56.237992 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-23 07:18:56.238002 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-23 07:18:56.238072 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-23 07:18:56.238087 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-23 07:18:56.238098 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:18:56.238108 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-23 07:18:56.238119 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-23 07:18:56.238137 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-23 07:18:56.238148 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-23 07:18:56.238167 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-23 07:18:56.238177 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-23 07:18:56.238188 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-23 07:18:56.238208 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-23 07:18:59.004233 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-23 07:18:59.004388 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-23 07:18:59.004407 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-23 07:18:59.004417 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-23 07:18:59.004442 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-23 07:18:59.004452 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-23 07:18:59.004461 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:18:59.004471 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-23 07:18:59.004480 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-23 07:18:59.004489 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-23 07:18:59.004501 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-23 07:18:59.004510 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-23 07:18:59.004519 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-23 07:18:59.004527 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-23 07:18:59.004536 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-23 07:18:59.004545 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-23 07:18:59.004553 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-23 07:18:59.004562 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.core.rmem_max', 'value': 16777216})  2025-09-23 07:18:59.004570 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:18:59.004579 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-23 07:18:59.004588 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-23 07:18:59.004596 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-23 07:18:59.004605 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-23 07:18:59.004613 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-23 07:18:59.004622 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:18:59.004631 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-23 07:18:59.004639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-23 07:18:59.004670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-23 07:18:59.004680 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-23 07:18:59.004688 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-23 07:18:59.004697 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-23 07:18:59.004705 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-23 07:18:59.004714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-23 07:18:59.004723 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.core.wmem_max', 'value': 16777216}) 2025-09-23 07:18:59.004731 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-23 07:18:59.004740 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-23 07:18:59.004748 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-23 07:18:59.004757 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-23 07:18:59.004765 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-23 07:18:59.004774 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-23 07:18:59.004784 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-23 07:18:59.004795 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-23 07:18:59.004805 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-23 07:18:59.004832 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-23 07:18:59.004843 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-23 07:18:59.004853 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-23 07:18:59.004863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-23 07:18:59.004873 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-23 07:18:59.004888 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-23 
07:18:59.004899 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-23 07:18:59.004909 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-23 07:18:59.004919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-23 07:18:59.004930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-23 07:18:59.004940 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-23 07:18:59.004950 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-23 07:18:59.004960 | orchestrator | 2025-09-23 07:18:59.004970 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-23 07:18:59.004980 | orchestrator | Tuesday 23 September 2025 07:18:56 +0000 (0:00:04.607) 0:03:30.023 ***** 2025-09-23 07:18:59.004991 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-23 07:18:59.005000 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-23 07:18:59.005011 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-23 07:18:59.005027 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-23 07:18:59.005038 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-23 07:18:59.005048 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-23 07:18:59.005058 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-23 07:18:59.005068 | orchestrator | 2025-09-23 07:18:59.005078 | orchestrator | TASK 
[osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-23 07:18:59.005088 | orchestrator | Tuesday 23 September 2025 07:18:57 +0000 (0:00:01.610) 0:03:31.634 ***** 2025-09-23 07:18:59.005098 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-23 07:18:59.005108 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:18:59.005118 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-23 07:18:59.005129 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:18:59.005139 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-23 07:18:59.005149 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:18:59.005160 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-23 07:18:59.005170 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:18:59.005196 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-23 07:18:59.005207 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-23 07:18:59.005225 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-23 07:18:59.005234 | orchestrator | 2025-09-23 07:18:59.005242 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-09-23 07:18:59.005251 | orchestrator | Tuesday 23 September 2025 07:18:58 +0000 (0:00:00.615) 0:03:32.250 ***** 2025-09-23 07:18:59.005259 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-23 07:18:59.005268 | orchestrator | skipping: [testbed-manager] 2025-09-23 
07:18:59.005277 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-23 07:18:59.005285 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-23 07:18:59.005294 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:18:59.005303 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:18:59.005311 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-23 07:18:59.005343 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:18:59.005352 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-23 07:18:59.005360 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-23 07:18:59.005369 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-23 07:18:59.005378 | orchestrator | 2025-09-23 07:18:59.005391 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-23 07:19:12.651270 | orchestrator | Tuesday 23 September 2025 07:18:58 +0000 (0:00:00.544) 0:03:32.795 ***** 2025-09-23 07:19:12.651434 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-23 07:19:12.651455 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:19:12.651468 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-23 07:19:12.651495 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-23 07:19:12.651557 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:19:12.651571 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024})  2025-09-23 07:19:12.651582 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:19:12.651593 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:19:12.651604 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-23 07:19:12.651615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-23 07:19:12.651626 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-23 07:19:12.651637 | orchestrator | 2025-09-23 07:19:12.651648 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-23 07:19:12.651659 | orchestrator | Tuesday 23 September 2025 07:18:59 +0000 (0:00:00.706) 0:03:33.501 ***** 2025-09-23 07:19:12.651670 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:19:12.651681 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:19:12.651691 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:19:12.651702 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:19:12.651713 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:19:12.651723 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:19:12.651734 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:19:12.651745 | orchestrator | 2025-09-23 07:19:12.651756 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-23 07:19:12.651767 | orchestrator | Tuesday 23 September 2025 07:18:59 +0000 (0:00:00.281) 0:03:33.783 ***** 2025-09-23 07:19:12.651778 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:19:12.651790 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:19:12.651803 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:19:12.651815 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:19:12.651828 | orchestrator | ok: [testbed-node-4] 
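The sysctl tasks above apply kernel parameters per host group (generic, compute, network, k3s_node), each item being a `{'name': ..., 'value': ...}` pair. As a minimal illustration only (this helper is hypothetical and not part of the `osism.commons.sysctl` role, which is an Ansible role), such items can be rendered into `/etc/sysctl.conf`-style lines:

```python
# Sketch: turn sysctl items like those reported in the log above into
# sysctl.conf-style "key=value" lines. The item names and values below
# mirror the job output; the helper itself is illustrative only.
def render_sysctl(items):
    return "\n".join(f"{i['name']}={i['value']}" for i in items)

generic = [
    {"name": "vm.swappiness", "value": 1},
    {"name": "net.ipv4.tcp_max_syn_backlog", "value": 8192},
]
print(render_sysctl(generic))
# vm.swappiness=1
# net.ipv4.tcp_max_syn_backlog=8192
```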
2025-09-23 07:19:12.651839 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:19:12.651853 | orchestrator | ok: [testbed-manager] 2025-09-23 07:19:12.651865 | orchestrator | 2025-09-23 07:19:12.651877 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-23 07:19:12.651889 | orchestrator | Tuesday 23 September 2025 07:19:06 +0000 (0:00:06.554) 0:03:40.337 ***** 2025-09-23 07:19:12.651902 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-23 07:19:12.651914 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:19:12.651926 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-23 07:19:12.651939 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-23 07:19:12.651951 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:19:12.651963 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-23 07:19:12.651975 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:19:12.651987 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:19:12.651999 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-23 07:19:12.652009 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-23 07:19:12.652020 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:19:12.652031 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:19:12.652041 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-23 07:19:12.652052 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:19:12.652062 | orchestrator | 2025-09-23 07:19:12.652073 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-23 07:19:12.652084 | orchestrator | Tuesday 23 September 2025 07:19:06 +0000 (0:00:00.335) 0:03:40.673 ***** 2025-09-23 07:19:12.652095 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-23 07:19:12.652106 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-23 07:19:12.652117 
| orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-23 07:19:12.652127 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-23 07:19:12.652146 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-23 07:19:12.652157 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-23 07:19:12.652167 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-23 07:19:12.652178 | orchestrator | 2025-09-23 07:19:12.652189 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-23 07:19:12.652200 | orchestrator | Tuesday 23 September 2025 07:19:07 +0000 (0:00:01.048) 0:03:41.721 ***** 2025-09-23 07:19:12.652212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:19:12.652226 | orchestrator | 2025-09-23 07:19:12.652237 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-23 07:19:12.652248 | orchestrator | Tuesday 23 September 2025 07:19:08 +0000 (0:00:00.491) 0:03:42.212 ***** 2025-09-23 07:19:12.652259 | orchestrator | ok: [testbed-manager] 2025-09-23 07:19:12.652270 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:19:12.652280 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:19:12.652291 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:19:12.652302 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:19:12.652339 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:19:12.652352 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:19:12.652363 | orchestrator | 2025-09-23 07:19:12.652373 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-23 07:19:12.652384 | orchestrator | Tuesday 23 September 2025 07:19:09 +0000 (0:00:01.388) 0:03:43.601 ***** 2025-09-23 07:19:12.652395 | 
orchestrator | ok: [testbed-manager] 2025-09-23 07:19:12.652424 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:19:12.652436 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:19:12.652446 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:19:12.652457 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:19:12.652467 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:19:12.652478 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:19:12.652488 | orchestrator | 2025-09-23 07:19:12.652499 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-23 07:19:12.652510 | orchestrator | Tuesday 23 September 2025 07:19:10 +0000 (0:00:00.622) 0:03:44.223 ***** 2025-09-23 07:19:12.652520 | orchestrator | changed: [testbed-manager] 2025-09-23 07:19:12.652531 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:19:12.652542 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:19:12.652553 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:19:12.652563 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:19:12.652574 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:19:12.652584 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:19:12.652595 | orchestrator | 2025-09-23 07:19:12.652606 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-23 07:19:12.652616 | orchestrator | Tuesday 23 September 2025 07:19:11 +0000 (0:00:00.643) 0:03:44.866 ***** 2025-09-23 07:19:12.652627 | orchestrator | ok: [testbed-manager] 2025-09-23 07:19:12.652638 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:19:12.652648 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:19:12.652659 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:19:12.652669 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:19:12.652680 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:19:12.652691 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:19:12.652701 | orchestrator | 
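The "Disable the dynamic motd-news service" task above reports `changed` on every host. On Ubuntu, one common way to accomplish this is setting `ENABLED=0` in `/etc/default/motd-news` (the file whose existence the preceding task checks); the sketch below assumes that mechanism and is not taken from the `osism.commons.motd` role itself:

```python
# Illustrative only: rewrite an /etc/default/motd-news-style config so
# the dynamic motd-news service is disabled (ENABLED=0). Whether the
# role edits this file or stops a systemd unit is not visible in the log.
def disable_motd_news(conf):
    out = []
    for ln in conf.splitlines():
        if ln.strip().startswith("ENABLED="):
            out.append("ENABLED=0")  # force the flag off
        else:
            out.append(ln)
    return "\n".join(out)

print(disable_motd_news("# motd-news config\nENABLED=1"))
# # motd-news config
# ENABLED=0
```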
2025-09-23 07:19:12.652712 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-23 07:19:12.652723 | orchestrator | Tuesday 23 September 2025 07:19:11 +0000 (0:00:00.606) 0:03:45.472 ***** 2025-09-23 07:19:12.652738 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758610446.143594, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:12.652767 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758610487.3310227, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:12.652779 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758610470.8561888, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2025-09-23 07:19:12.652790 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758610474.989308, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:12.652802 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758610490.1571424, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:12.652830 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758610476.9247532, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833332 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758610475.5045495, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833434 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833472 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833485 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833497 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833508 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833519 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833561 | 
orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:19:28.833575 | orchestrator | 2025-09-23 07:19:28.833588 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-23 07:19:28.833600 | orchestrator | Tuesday 23 September 2025 07:19:12 +0000 (0:00:00.966) 0:03:46.439 ***** 2025-09-23 07:19:28.833611 | orchestrator | changed: [testbed-manager] 2025-09-23 07:19:28.833631 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:19:28.833642 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:19:28.833652 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:19:28.833663 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:19:28.833674 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:19:28.833684 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:19:28.833695 | orchestrator | 2025-09-23 07:19:28.833706 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-23 07:19:28.833717 | orchestrator | Tuesday 23 September 2025 07:19:13 +0000 (0:00:01.091) 0:03:47.531 ***** 2025-09-23 07:19:28.833728 | orchestrator | changed: [testbed-manager] 2025-09-23 07:19:28.833738 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:19:28.833749 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:19:28.833759 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:19:28.833770 | orchestrator | changed: [testbed-node-4] 2025-09-23 
07:19:28.833781 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:19:28.833791 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:19:28.833802 | orchestrator | 2025-09-23 07:19:28.833813 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-23 07:19:28.833824 | orchestrator | Tuesday 23 September 2025 07:19:14 +0000 (0:00:01.165) 0:03:48.696 ***** 2025-09-23 07:19:28.833834 | orchestrator | changed: [testbed-manager] 2025-09-23 07:19:28.833846 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:19:28.833858 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:19:28.833871 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:19:28.833883 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:19:28.833895 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:19:28.833907 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:19:28.833919 | orchestrator | 2025-09-23 07:19:28.833932 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-23 07:19:28.833946 | orchestrator | Tuesday 23 September 2025 07:19:15 +0000 (0:00:01.098) 0:03:49.795 ***** 2025-09-23 07:19:28.833959 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:19:28.833971 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:19:28.833983 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:19:28.833995 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:19:28.834007 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:19:28.834071 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:19:28.834084 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:19:28.834096 | orchestrator | 2025-09-23 07:19:28.834108 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-23 07:19:28.834129 | orchestrator | Tuesday 23 September 2025 07:19:16 +0000 (0:00:00.217) 0:03:50.013 ***** 2025-09-23 
07:19:28.834141 | orchestrator | ok: [testbed-manager] 2025-09-23 07:19:28.834152 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:19:28.834163 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:19:28.834173 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:19:28.834184 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:19:28.834194 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:19:28.834205 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:19:28.834215 | orchestrator | 2025-09-23 07:19:28.834226 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-23 07:19:28.834237 | orchestrator | Tuesday 23 September 2025 07:19:16 +0000 (0:00:00.686) 0:03:50.699 ***** 2025-09-23 07:19:28.834249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:19:28.834261 | orchestrator | 2025-09-23 07:19:28.834272 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-23 07:19:28.834283 | orchestrator | Tuesday 23 September 2025 07:19:17 +0000 (0:00:00.345) 0:03:51.045 ***** 2025-09-23 07:19:28.834293 | orchestrator | ok: [testbed-manager] 2025-09-23 07:19:28.834304 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:19:28.834386 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:19:28.834401 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:19:28.834412 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:19:28.834423 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:19:28.834433 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:19:28.834444 | orchestrator | 2025-09-23 07:19:28.834454 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-23 07:19:28.834465 | orchestrator | 
Tuesday 23 September 2025 07:19:25 +0000 (0:00:08.245) 0:03:59.290 ***** 2025-09-23 07:19:28.834476 | orchestrator | ok: [testbed-manager] 2025-09-23 07:19:28.834487 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:19:28.834497 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:19:28.834508 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:19:28.834518 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:19:28.834529 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:19:28.834539 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:19:28.834549 | orchestrator | 2025-09-23 07:19:28.834560 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-23 07:19:28.834571 | orchestrator | Tuesday 23 September 2025 07:19:26 +0000 (0:00:01.408) 0:04:00.699 ***** 2025-09-23 07:19:28.834582 | orchestrator | ok: [testbed-manager] 2025-09-23 07:19:28.834592 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:19:28.834603 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:19:28.834613 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:19:28.834623 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:19:28.834634 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:19:28.834644 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:19:28.834654 | orchestrator | 2025-09-23 07:19:28.834680 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-23 07:20:37.697728 | orchestrator | Tuesday 23 September 2025 07:19:28 +0000 (0:00:01.922) 0:04:02.621 ***** 2025-09-23 07:20:37.697812 | orchestrator | ok: [testbed-manager] 2025-09-23 07:20:37.697820 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:20:37.697827 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:20:37.697832 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:20:37.697838 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:20:37.697843 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:20:37.697849 | orchestrator | ok: 
[testbed-node-5] 2025-09-23 07:20:37.697854 | orchestrator | 2025-09-23 07:20:37.697860 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-23 07:20:37.697867 | orchestrator | Tuesday 23 September 2025 07:19:29 +0000 (0:00:00.235) 0:04:02.857 ***** 2025-09-23 07:20:37.697872 | orchestrator | ok: [testbed-manager] 2025-09-23 07:20:37.697877 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:20:37.697882 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:20:37.697887 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:20:37.697892 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:20:37.697897 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:20:37.697902 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:20:37.697908 | orchestrator | 2025-09-23 07:20:37.697913 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-23 07:20:37.697918 | orchestrator | Tuesday 23 September 2025 07:19:29 +0000 (0:00:00.329) 0:04:03.187 ***** 2025-09-23 07:20:37.697923 | orchestrator | ok: [testbed-manager] 2025-09-23 07:20:37.697928 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:20:37.697933 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:20:37.697938 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:20:37.697944 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:20:37.697949 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:20:37.697954 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:20:37.697959 | orchestrator | 2025-09-23 07:20:37.697965 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-23 07:20:37.697970 | orchestrator | Tuesday 23 September 2025 07:19:29 +0000 (0:00:00.234) 0:04:03.421 ***** 2025-09-23 07:20:37.697975 | orchestrator | ok: [testbed-manager] 2025-09-23 07:20:37.697999 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:20:37.698004 | orchestrator | ok: 
[testbed-node-1] 2025-09-23 07:20:37.698009 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:20:37.698052 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:20:37.698059 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:20:37.698064 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:20:37.698069 | orchestrator | 2025-09-23 07:20:37.698074 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-23 07:20:37.698079 | orchestrator | Tuesday 23 September 2025 07:19:35 +0000 (0:00:05.720) 0:04:09.141 ***** 2025-09-23 07:20:37.698085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:20:37.698092 | orchestrator | 2025-09-23 07:20:37.698097 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-23 07:20:37.698102 | orchestrator | Tuesday 23 September 2025 07:19:35 +0000 (0:00:00.401) 0:04:09.542 ***** 2025-09-23 07:20:37.698108 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-23 07:20:37.698113 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-23 07:20:37.698119 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:20:37.698124 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-23 07:20:37.698129 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-23 07:20:37.698134 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-23 07:20:37.698139 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-23 07:20:37.698145 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:20:37.698150 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-23 07:20:37.698155 | orchestrator | 
skipping: [testbed-node-1] 2025-09-23 07:20:37.698160 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-23 07:20:37.698165 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-23 07:20:37.698170 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:20:37.698175 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-23 07:20:37.698180 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-23 07:20:37.698185 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-23 07:20:37.698190 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:20:37.698195 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:20:37.698200 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-23 07:20:37.698205 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-23 07:20:37.698210 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:20:37.698215 | orchestrator | 2025-09-23 07:20:37.698220 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-23 07:20:37.698225 | orchestrator | Tuesday 23 September 2025 07:19:36 +0000 (0:00:00.376) 0:04:09.919 ***** 2025-09-23 07:20:37.698231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:20:37.698236 | orchestrator | 2025-09-23 07:20:37.698241 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-23 07:20:37.698247 | orchestrator | Tuesday 23 September 2025 07:19:36 +0000 (0:00:00.414) 0:04:10.333 ***** 2025-09-23 07:20:37.698252 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-23 07:20:37.698257 | orchestrator | skipping: 
[testbed-manager] 2025-09-23 07:20:37.698262 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-23 07:20:37.698267 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-23 07:20:37.698272 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:20:37.698313 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-23 07:20:37.698319 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:20:37.698325 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:20:37.698331 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-23 07:20:37.698337 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-23 07:20:37.698343 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:20:37.698349 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:20:37.698355 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-23 07:20:37.698361 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:20:37.698367 | orchestrator | 2025-09-23 07:20:37.698373 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-23 07:20:37.698378 | orchestrator | Tuesday 23 September 2025 07:19:36 +0000 (0:00:00.316) 0:04:10.649 ***** 2025-09-23 07:20:37.698384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:20:37.698390 | orchestrator | 2025-09-23 07:20:37.698396 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-23 07:20:37.698402 | orchestrator | Tuesday 23 September 2025 07:19:37 +0000 (0:00:00.401) 0:04:11.051 ***** 2025-09-23 07:20:37.698407 | orchestrator | changed: [testbed-manager] 2025-09-23 
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]

TASK [osism.commons.cleanup : Remove cloudinit package] ************************
Tuesday 23 September 2025 07:20:11 +0000 (0:00:34.292) 0:04:45.343 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
Tuesday 23 September 2025 07:20:19 +0000 (0:00:08.018) 0:04:53.362 *****
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]
TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
Tuesday 23 September 2025 07:20:27 +0000 (0:00:07.689) 0:05:01.052 *****
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
Tuesday 23 September 2025 07:20:28 +0000 (0:00:01.709) 0:05:02.761 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-4]

TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
Tuesday 23 September 2025 07:20:34 +0000 (0:00:05.764) 0:05:08.526 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
Tuesday 23 September 2025 07:20:35 +0000 (0:00:00.539) 0:05:09.066 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.timezone : Install tzdata package] *************************
Tuesday 23 September 2025 07:20:35 +0000 (0:00:00.728) 0:05:09.794 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [osism.commons.timezone : Set timezone to UTC] ****************************
Tuesday 23 September 2025 07:20:37 +0000 (0:00:01.690) 0:05:11.485 *****
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
Tuesday 23 September 2025 07:20:38 +0000 (0:00:00.766) 0:05:12.251 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
Tuesday 23 September 2025 07:20:38 +0000 (0:00:00.273) 0:05:12.525 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
TASK [osism.services.docker : Gather variables for each operating system] ******
Tuesday 23 September 2025 07:20:39 +0000 (0:00:00.392) 0:05:12.918 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Set docker_version variable to default value] ****
Tuesday 23 September 2025 07:20:39 +0000 (0:00:00.311) 0:05:13.229 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
Tuesday 23 September 2025 07:20:39 +0000 (0:00:00.243) 0:05:13.473 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Print used docker version] ***********************
Tuesday 23 September 2025 07:20:39 +0000 (0:00:00.312) 0:05:13.785 *****
ok: [testbed-manager] =>
  docker_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_version: 5:27.5.1
ok: [testbed-node-3] =>
  docker_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_version: 5:27.5.1

TASK [osism.services.docker : Print used docker cli version] *******************
Tuesday 23 September 2025 07:20:40 +0000 (0:00:00.255) 0:05:14.040 *****
ok: [testbed-manager] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-3] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_cli_version: 5:27.5.1

TASK [osism.services.docker : Include block storage tasks] *********************
Tuesday 23 September 2025 07:20:40 +0000 (0:00:00.288) 0:05:14.329 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Include zram storage tasks] **********************
Tuesday 23 September 2025 07:20:40 +0000 (0:00:00.294) 0:05:14.623 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Include docker install tasks] ********************
Tuesday 23 September 2025 07:20:41 +0000 (0:00:00.278) 0:05:14.902 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.docker : Remove old architecture-dependent repository] ****
Tuesday 23 September 2025 07:20:41 +0000 (0:00:00.422) 0:05:15.324 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.services.docker : Gather package facts] ****************************
Tuesday 23 September 2025 07:20:42 +0000 (0:00:00.875) 0:05:16.200 *****
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-3]

TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
Tuesday 23 September 2025 07:20:45 +0000 (0:00:03.338) 0:05:19.539 *****
skipping: [testbed-manager] => (item=containerd)
skipping: [testbed-manager] => (item=docker.io)
skipping: [testbed-manager] => (item=docker-engine)
skipping: [testbed-node-0] => (item=containerd)
skipping: [testbed-node-0] => (item=docker.io)
skipping: [testbed-node-0] => (item=docker-engine)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=containerd)
skipping: [testbed-node-1] => (item=docker.io)
skipping: [testbed-node-1] => (item=docker-engine)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=containerd)
skipping: [testbed-node-2] => (item=docker.io)
skipping: [testbed-node-2] => (item=docker-engine)
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item=containerd)
skipping: [testbed-node-3] => (item=docker.io)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=docker-engine)
skipping: [testbed-node-4] => (item=containerd)
skipping: [testbed-node-4] => (item=docker.io)
skipping: [testbed-node-4] => (item=docker-engine)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=containerd)
skipping: [testbed-node-5] => (item=docker.io)
skipping: [testbed-node-5] => (item=docker-engine)
skipping: [testbed-node-5]

TASK [osism.services.docker : Install apt-transport-https package] *************
Tuesday 23 September 2025 07:20:46 +0000 (0:00:00.580) 0:05:20.119 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-2]
TASK [osism.services.docker : Add repository gpg key] **************************
Tuesday 23 September 2025 07:20:53 +0000 (0:00:06.780) 0:05:26.900 *****
changed: [testbed-node-0]
ok: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Add repository] **********************************
Tuesday 23 September 2025 07:20:54 +0000 (0:00:01.204) 0:05:28.105 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.services.docker : Update package cache] ****************************
Tuesday 23 September 2025 07:21:02 +0000 (0:00:08.322) 0:05:36.428 *****
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-2]

TASK [osism.services.docker : Pin docker package version] **********************
Tuesday 23 September 2025 07:21:06 +0000 (0:00:03.472) 0:05:39.900 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [osism.services.docker : Pin docker-cli package version] ******************
Tuesday 23 September 2025 07:21:07 +0000 (0:00:01.325) 0:05:41.225 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Unlock containerd package] ***********************
Tuesday 23 September 2025 07:21:08 +0000 (0:00:01.500) 0:05:42.725 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.services.docker : Install containerd package] **********************
Tuesday 23 September 2025 07:21:09 +0000 (0:00:00.596) 0:05:43.322 *****
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-4]

TASK [osism.services.docker : Lock containerd package] *************************
Tuesday 23 September 2025 07:21:19 +0000 (0:00:09.722) 0:05:53.044 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Install docker-cli package] **********************
Tuesday 23 September 2025 07:21:20 +0000 (0:00:00.929) 0:05:53.974 *****
ok: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker package] **************************
Tuesday 23 September 2025 07:21:28 +0000 (0:00:08.614) 0:06:02.589 *****
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.services.docker : Unblock installation of python docker packages] ***
Tuesday 23 September 2025 07:21:39 +0000 (0:00:10.694) 0:06:13.283 *****
ok: [testbed-manager] => (item=python3-docker)
ok: [testbed-node-0] => (item=python3-docker)
ok: [testbed-node-1] => (item=python3-docker)
ok: [testbed-node-2] => (item=python3-docker)
ok: [testbed-node-3] => (item=python3-docker)
ok: [testbed-manager] => (item=python-docker)
ok: [testbed-node-4] => (item=python3-docker)
ok: [testbed-node-0] => (item=python-docker)
ok: [testbed-node-5] => (item=python3-docker)
ok: [testbed-node-1] => (item=python-docker)
ok: [testbed-node-2] => (item=python-docker)
ok: [testbed-node-3] => (item=python-docker)
ok: [testbed-node-4] => (item=python-docker)
ok: [testbed-node-5] => (item=python-docker)

TASK [osism.services.docker : Install python3 docker package] ******************
Tuesday 23 September 2025 07:21:40 +0000 (0:00:01.194) 0:06:14.478 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
Tuesday 23 September 2025 07:21:41 +0000 (0:00:00.535) 0:06:15.013 *****
ok: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
Tuesday 23 September 2025 07:21:44 +0000 (0:00:03.568) 0:06:18.582 *****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
Tuesday 23 September 2025 07:21:45 +0000 (0:00:00.534) 0:06:19.116 *****
skipping: [testbed-manager] => (item=python3-docker)
skipping: [testbed-manager] => (item=python-docker)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=python3-docker)
skipping: [testbed-node-0] => (item=python-docker)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=python3-docker)
skipping: [testbed-node-1] => (item=python-docker)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=python3-docker)
skipping: [testbed-node-2] => (item=python-docker)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=python3-docker)
skipping: [testbed-node-3] => (item=python-docker)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=python3-docker)
skipping: [testbed-node-4] => (item=python-docker)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=python3-docker)
skipping: [testbed-node-5] => (item=python-docker)
skipping: [testbed-node-5]

TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
orchestrator | Tuesday 23 September 2025 07:21:46 +0000 (0:00:00.778) 0:06:19.894 ***** 2025-09-23 07:21:46.626171 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:21:46.626181 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:21:46.626192 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:21:46.626202 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:21:46.626213 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:21:46.626224 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:21:46.626234 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:21:46.626245 | orchestrator | 2025-09-23 07:21:46.626264 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-23 07:22:07.751797 | orchestrator | Tuesday 23 September 2025 07:21:46 +0000 (0:00:00.521) 0:06:20.416 ***** 2025-09-23 07:22:07.751951 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:22:07.751968 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:22:07.751979 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:22:07.751989 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:22:07.751999 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:22:07.752009 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:22:07.752018 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:22:07.752028 | orchestrator | 2025-09-23 07:22:07.752038 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-23 07:22:07.752049 | orchestrator | Tuesday 23 September 2025 07:21:47 +0000 (0:00:00.528) 0:06:20.945 ***** 2025-09-23 07:22:07.752058 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:22:07.752068 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:22:07.752077 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:22:07.752087 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:22:07.752096 | orchestrator | 
skipping: [testbed-node-3] 2025-09-23 07:22:07.752105 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:22:07.752115 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:22:07.752124 | orchestrator | 2025-09-23 07:22:07.752134 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-23 07:22:07.752143 | orchestrator | Tuesday 23 September 2025 07:21:47 +0000 (0:00:00.550) 0:06:21.496 ***** 2025-09-23 07:22:07.752153 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.752164 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:07.752174 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:07.752183 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:07.752192 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:07.752202 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:07.752211 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:07.752220 | orchestrator | 2025-09-23 07:22:07.752230 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-23 07:22:07.752240 | orchestrator | Tuesday 23 September 2025 07:21:49 +0000 (0:00:01.730) 0:06:23.226 ***** 2025-09-23 07:22:07.752251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:22:07.752263 | orchestrator | 2025-09-23 07:22:07.752316 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-23 07:22:07.752327 | orchestrator | Tuesday 23 September 2025 07:21:50 +0000 (0:00:01.091) 0:06:24.318 ***** 2025-09-23 07:22:07.752336 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.752346 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:07.752355 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:07.752365 | orchestrator | 
changed: [testbed-node-2] 2025-09-23 07:22:07.752374 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:07.752384 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:07.752393 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:07.752403 | orchestrator | 2025-09-23 07:22:07.752432 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-23 07:22:07.752442 | orchestrator | Tuesday 23 September 2025 07:21:51 +0000 (0:00:00.836) 0:06:25.154 ***** 2025-09-23 07:22:07.752452 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.752461 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:07.752471 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:07.752480 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:07.752490 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:07.752499 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:07.752508 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:07.752517 | orchestrator | 2025-09-23 07:22:07.752527 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-23 07:22:07.752537 | orchestrator | Tuesday 23 September 2025 07:21:52 +0000 (0:00:00.842) 0:06:25.996 ***** 2025-09-23 07:22:07.752547 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.752556 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:07.752566 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:07.752575 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:07.752585 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:07.752594 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:07.752604 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:07.752613 | orchestrator | 2025-09-23 07:22:07.752623 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-23 07:22:07.752633 | 
orchestrator | Tuesday 23 September 2025 07:21:53 +0000 (0:00:01.527) 0:06:27.524 ***** 2025-09-23 07:22:07.752643 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:22:07.752652 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:07.752662 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:07.752671 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:07.752681 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:07.752690 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:07.752699 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:07.752709 | orchestrator | 2025-09-23 07:22:07.752718 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-23 07:22:07.752728 | orchestrator | Tuesday 23 September 2025 07:21:55 +0000 (0:00:01.404) 0:06:28.929 ***** 2025-09-23 07:22:07.752737 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.752747 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:07.752756 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:07.752765 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:07.752775 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:07.752784 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:07.752793 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:07.752803 | orchestrator | 2025-09-23 07:22:07.752812 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-23 07:22:07.752822 | orchestrator | Tuesday 23 September 2025 07:21:56 +0000 (0:00:01.336) 0:06:30.266 ***** 2025-09-23 07:22:07.752831 | orchestrator | changed: [testbed-manager] 2025-09-23 07:22:07.752841 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:07.752850 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:07.752860 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:07.752869 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:07.752879 | 
orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:07.752888 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:07.752898 | orchestrator | 2025-09-23 07:22:07.752923 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-23 07:22:07.752934 | orchestrator | Tuesday 23 September 2025 07:21:57 +0000 (0:00:01.429) 0:06:31.696 ***** 2025-09-23 07:22:07.752943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:22:07.752953 | orchestrator | 2025-09-23 07:22:07.752963 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-23 07:22:07.752980 | orchestrator | Tuesday 23 September 2025 07:21:58 +0000 (0:00:01.084) 0:06:32.781 ***** 2025-09-23 07:22:07.752990 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:07.752999 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.753008 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:07.753018 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:07.753027 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:07.753037 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:07.753046 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:07.753055 | orchestrator | 2025-09-23 07:22:07.753065 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-23 07:22:07.753074 | orchestrator | Tuesday 23 September 2025 07:22:00 +0000 (0:00:01.393) 0:06:34.174 ***** 2025-09-23 07:22:07.753084 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.753093 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:07.753103 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:07.753112 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:07.753121 | orchestrator | 
ok: [testbed-node-3] 2025-09-23 07:22:07.753131 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:07.753140 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:07.753149 | orchestrator | 2025-09-23 07:22:07.753159 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-23 07:22:07.753168 | orchestrator | Tuesday 23 September 2025 07:22:01 +0000 (0:00:01.111) 0:06:35.286 ***** 2025-09-23 07:22:07.753178 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.753187 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:07.753196 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:07.753205 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:07.753215 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:07.753224 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:07.753233 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:07.753243 | orchestrator | 2025-09-23 07:22:07.753252 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-23 07:22:07.753262 | orchestrator | Tuesday 23 September 2025 07:22:02 +0000 (0:00:01.109) 0:06:36.395 ***** 2025-09-23 07:22:07.753290 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:07.753301 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:07.753310 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:07.753320 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:07.753329 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:07.753338 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:07.753348 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:07.753357 | orchestrator | 2025-09-23 07:22:07.753367 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-23 07:22:07.753376 | orchestrator | Tuesday 23 September 2025 07:22:03 +0000 (0:00:01.108) 0:06:37.504 ***** 2025-09-23 07:22:07.753386 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:22:07.753396 | orchestrator | 2025-09-23 07:22:07.753405 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-23 07:22:07.753415 | orchestrator | Tuesday 23 September 2025 07:22:04 +0000 (0:00:01.154) 0:06:38.659 ***** 2025-09-23 07:22:07.753424 | orchestrator | 2025-09-23 07:22:07.753434 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-23 07:22:07.753443 | orchestrator | Tuesday 23 September 2025 07:22:04 +0000 (0:00:00.041) 0:06:38.700 ***** 2025-09-23 07:22:07.753452 | orchestrator | 2025-09-23 07:22:07.753462 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-23 07:22:07.753471 | orchestrator | Tuesday 23 September 2025 07:22:04 +0000 (0:00:00.045) 0:06:38.745 ***** 2025-09-23 07:22:07.753481 | orchestrator | 2025-09-23 07:22:07.753490 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-23 07:22:07.753499 | orchestrator | Tuesday 23 September 2025 07:22:04 +0000 (0:00:00.039) 0:06:38.784 ***** 2025-09-23 07:22:07.753519 | orchestrator | 2025-09-23 07:22:07.753529 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-23 07:22:07.753539 | orchestrator | Tuesday 23 September 2025 07:22:05 +0000 (0:00:00.040) 0:06:38.824 ***** 2025-09-23 07:22:07.753548 | orchestrator | 2025-09-23 07:22:07.753558 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-23 07:22:07.753567 | orchestrator | Tuesday 23 September 2025 07:22:05 +0000 (0:00:00.046) 0:06:38.871 ***** 2025-09-23 07:22:07.753576 | orchestrator | 2025-09-23 
07:22:07.753586 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-23 07:22:07.753595 | orchestrator | Tuesday 23 September 2025 07:22:05 +0000 (0:00:00.040) 0:06:38.911 ***** 2025-09-23 07:22:07.753605 | orchestrator | 2025-09-23 07:22:07.753614 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-23 07:22:07.753624 | orchestrator | Tuesday 23 September 2025 07:22:05 +0000 (0:00:00.040) 0:06:38.951 ***** 2025-09-23 07:22:07.753633 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:07.753643 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:07.753652 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:07.753661 | orchestrator | 2025-09-23 07:22:07.753671 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-23 07:22:07.753680 | orchestrator | Tuesday 23 September 2025 07:22:06 +0000 (0:00:01.279) 0:06:40.230 ***** 2025-09-23 07:22:07.753690 | orchestrator | changed: [testbed-manager] 2025-09-23 07:22:07.753700 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:07.753714 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:07.753724 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:07.753733 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:07.753748 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:36.998810 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:36.998920 | orchestrator | 2025-09-23 07:22:36.998936 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-23 07:22:36.998949 | orchestrator | Tuesday 23 September 2025 07:22:07 +0000 (0:00:01.306) 0:06:41.537 ***** 2025-09-23 07:22:36.998961 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:22:36.998972 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:36.998983 | orchestrator | changed: [testbed-node-1] 2025-09-23 
07:22:36.998993 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:36.999004 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:36.999015 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:36.999025 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:36.999036 | orchestrator | 2025-09-23 07:22:36.999047 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-23 07:22:36.999058 | orchestrator | Tuesday 23 September 2025 07:22:10 +0000 (0:00:02.546) 0:06:44.084 ***** 2025-09-23 07:22:36.999069 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:22:36.999080 | orchestrator | 2025-09-23 07:22:36.999090 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-23 07:22:36.999101 | orchestrator | Tuesday 23 September 2025 07:22:10 +0000 (0:00:00.124) 0:06:44.208 ***** 2025-09-23 07:22:36.999112 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:36.999124 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:36.999134 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:36.999145 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:36.999156 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:36.999166 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:36.999176 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:36.999187 | orchestrator | 2025-09-23 07:22:36.999198 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-23 07:22:36.999210 | orchestrator | Tuesday 23 September 2025 07:22:11 +0000 (0:00:00.980) 0:06:45.188 ***** 2025-09-23 07:22:36.999220 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:22:36.999231 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:22:36.999261 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:22:36.999299 | orchestrator | skipping: [testbed-node-2] 2025-09-23 
07:22:36.999310 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:22:36.999321 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:22:36.999331 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:22:36.999342 | orchestrator | 2025-09-23 07:22:36.999353 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-23 07:22:36.999364 | orchestrator | Tuesday 23 September 2025 07:22:11 +0000 (0:00:00.587) 0:06:45.775 ***** 2025-09-23 07:22:36.999375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:22:36.999389 | orchestrator | 2025-09-23 07:22:36.999399 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-23 07:22:36.999410 | orchestrator | Tuesday 23 September 2025 07:22:13 +0000 (0:00:01.112) 0:06:46.888 ***** 2025-09-23 07:22:36.999421 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:36.999432 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:36.999443 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:36.999453 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:36.999464 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:36.999475 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:36.999485 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:36.999496 | orchestrator | 2025-09-23 07:22:36.999507 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-23 07:22:36.999518 | orchestrator | Tuesday 23 September 2025 07:22:13 +0000 (0:00:00.833) 0:06:47.721 ***** 2025-09-23 07:22:36.999529 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-23 07:22:36.999540 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-23 07:22:36.999551 
| orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-23 07:22:36.999561 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-23 07:22:36.999572 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-23 07:22:36.999583 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-23 07:22:36.999593 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-23 07:22:36.999604 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-23 07:22:36.999615 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-23 07:22:36.999626 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-23 07:22:36.999636 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-23 07:22:36.999647 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-23 07:22:36.999657 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-23 07:22:36.999668 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-23 07:22:36.999679 | orchestrator | 2025-09-23 07:22:36.999689 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-23 07:22:36.999700 | orchestrator | Tuesday 23 September 2025 07:22:16 +0000 (0:00:02.479) 0:06:50.201 ***** 2025-09-23 07:22:36.999711 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:22:36.999722 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:22:36.999732 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:22:36.999743 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:22:36.999754 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:22:36.999764 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:22:36.999775 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:22:36.999786 | orchestrator | 2025-09-23 07:22:36.999796 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-23 07:22:36.999808 | orchestrator | Tuesday 23 September 2025 07:22:16 +0000 (0:00:00.515) 0:06:50.717 ***** 2025-09-23 07:22:36.999845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:22:36.999866 | orchestrator | 2025-09-23 07:22:36.999877 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-23 07:22:36.999888 | orchestrator | Tuesday 23 September 2025 07:22:17 +0000 (0:00:01.010) 0:06:51.727 ***** 2025-09-23 07:22:36.999898 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:36.999909 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:36.999919 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:36.999930 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:36.999941 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:36.999951 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:36.999962 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:36.999972 | orchestrator | 2025-09-23 07:22:36.999983 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-23 07:22:36.999994 | orchestrator | Tuesday 23 September 2025 07:22:18 +0000 (0:00:00.839) 0:06:52.567 ***** 2025-09-23 07:22:37.000004 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:37.000016 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:37.000027 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:37.000037 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:37.000048 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:37.000059 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:37.000069 | orchestrator | ok: [testbed-node-5] 2025-09-23 
07:22:37.000080 | orchestrator | 2025-09-23 07:22:37.000091 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-23 07:22:37.000102 | orchestrator | Tuesday 23 September 2025 07:22:19 +0000 (0:00:00.881) 0:06:53.449 ***** 2025-09-23 07:22:37.000113 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:22:37.000124 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:22:37.000135 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:22:37.000145 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:22:37.000156 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:22:37.000166 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:22:37.000177 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:22:37.000187 | orchestrator | 2025-09-23 07:22:37.000198 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-23 07:22:37.000209 | orchestrator | Tuesday 23 September 2025 07:22:20 +0000 (0:00:00.527) 0:06:53.976 ***** 2025-09-23 07:22:37.000220 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:37.000230 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:37.000241 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:37.000252 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:22:37.000263 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:22:37.000302 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:37.000313 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:22:37.000324 | orchestrator | 2025-09-23 07:22:37.000334 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-23 07:22:37.000345 | orchestrator | Tuesday 23 September 2025 07:22:21 +0000 (0:00:01.698) 0:06:55.675 ***** 2025-09-23 07:22:37.000356 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:22:37.000367 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:22:37.000378 | orchestrator | skipping: 
[testbed-node-1] 2025-09-23 07:22:37.000389 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:22:37.000399 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:22:37.000410 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:22:37.000420 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:22:37.000431 | orchestrator | 2025-09-23 07:22:37.000442 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-23 07:22:37.000453 | orchestrator | Tuesday 23 September 2025 07:22:22 +0000 (0:00:00.516) 0:06:56.191 ***** 2025-09-23 07:22:37.000464 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:37.000475 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:37.000486 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:37.000504 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:37.000515 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:37.000525 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:37.000536 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:37.000546 | orchestrator | 2025-09-23 07:22:37.000557 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-23 07:22:37.000568 | orchestrator | Tuesday 23 September 2025 07:22:31 +0000 (0:00:08.749) 0:07:04.941 ***** 2025-09-23 07:22:37.000579 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:37.000590 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:37.000601 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:37.000611 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:37.000622 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:37.000633 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:37.000644 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:37.000654 | orchestrator | 2025-09-23 07:22:37.000665 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-09-23 07:22:37.000676 | orchestrator | Tuesday 23 September 2025 07:22:32 +0000 (0:00:01.274) 0:07:06.215 ***** 2025-09-23 07:22:37.000687 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:37.000698 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:37.000709 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:37.000720 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:37.000731 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:37.000741 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:37.000752 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:37.000762 | orchestrator | 2025-09-23 07:22:37.000773 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-23 07:22:37.000784 | orchestrator | Tuesday 23 September 2025 07:22:34 +0000 (0:00:02.021) 0:07:08.237 ***** 2025-09-23 07:22:37.000795 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:37.000806 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:22:37.000817 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:22:37.000828 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:22:37.000839 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:22:37.000849 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:22:37.000860 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:22:37.000871 | orchestrator | 2025-09-23 07:22:37.000882 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-23 07:22:37.000897 | orchestrator | Tuesday 23 September 2025 07:22:36 +0000 (0:00:01.687) 0:07:09.924 ***** 2025-09-23 07:22:37.000909 | orchestrator | ok: [testbed-manager] 2025-09-23 07:22:37.000920 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:22:37.000931 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:22:37.000941 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:22:37.000959 | orchestrator | ok: 
[testbed-node-3] 2025-09-23 07:23:08.767025 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:23:08.767169 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:23:08.767196 | orchestrator | 2025-09-23 07:23:08.767210 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-23 07:23:08.767231 | orchestrator | Tuesday 23 September 2025 07:22:36 +0000 (0:00:00.865) 0:07:10.789 ***** 2025-09-23 07:23:08.767253 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:23:08.767380 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:23:08.767401 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:23:08.767417 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:23:08.767433 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:23:08.767449 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:23:08.767466 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:23:08.767482 | orchestrator | 2025-09-23 07:23:08.767500 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-23 07:23:08.767641 | orchestrator | Tuesday 23 September 2025 07:22:37 +0000 (0:00:00.986) 0:07:11.776 ***** 2025-09-23 07:23:08.767665 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:23:08.767712 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:23:08.767725 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:23:08.767738 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:23:08.767750 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:23:08.767763 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:23:08.767776 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:23:08.767788 | orchestrator | 2025-09-23 07:23:08.767801 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-23 07:23:08.767815 | orchestrator | Tuesday 23 September 2025 07:22:38 +0000 (0:00:00.554) 0:07:12.331 
***** 2025-09-23 07:23:08.767827 | orchestrator | ok: [testbed-manager] 2025-09-23 07:23:08.767840 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:23:08.767853 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:23:08.767865 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:23:08.767878 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:23:08.767890 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:23:08.767902 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:23:08.767914 | orchestrator | 2025-09-23 07:23:08.767927 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-23 07:23:08.767940 | orchestrator | Tuesday 23 September 2025 07:22:39 +0000 (0:00:00.507) 0:07:12.838 ***** 2025-09-23 07:23:08.767951 | orchestrator | ok: [testbed-manager] 2025-09-23 07:23:08.767962 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:23:08.767973 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:23:08.767983 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:23:08.767994 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:23:08.768004 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:23:08.768015 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:23:08.768025 | orchestrator | 2025-09-23 07:23:08.768036 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-23 07:23:08.768047 | orchestrator | Tuesday 23 September 2025 07:22:39 +0000 (0:00:00.524) 0:07:13.363 ***** 2025-09-23 07:23:08.768058 | orchestrator | ok: [testbed-manager] 2025-09-23 07:23:08.768068 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:23:08.768079 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:23:08.768090 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:23:08.768100 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:23:08.768111 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:23:08.768122 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:23:08.768132 | orchestrator | 
2025-09-23 07:23:08.768143 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-23 07:23:08.768154 | orchestrator | Tuesday 23 September 2025 07:22:40 +0000 (0:00:00.556) 0:07:13.919 *****
2025-09-23 07:23:08.768165 | orchestrator | ok: [testbed-manager]
2025-09-23 07:23:08.768175 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:08.768186 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:08.768196 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:08.768207 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:08.768217 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:08.768228 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:08.768238 | orchestrator |
2025-09-23 07:23:08.768249 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-23 07:23:08.768286 | orchestrator | Tuesday 23 September 2025 07:22:46 +0000 (0:00:05.990) 0:07:19.910 *****
2025-09-23 07:23:08.768307 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:23:08.768318 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:23:08.768329 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:23:08.768341 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:23:08.768351 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:23:08.768362 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:23:08.768372 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:23:08.768383 | orchestrator |
2025-09-23 07:23:08.768394 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-23 07:23:08.768405 | orchestrator | Tuesday 23 September 2025 07:22:46 +0000 (0:00:00.531) 0:07:20.441 *****
2025-09-23 07:23:08.768427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:23:08.768441 | orchestrator |
2025-09-23 07:23:08.768452 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-23 07:23:08.768462 | orchestrator | Tuesday 23 September 2025 07:22:47 +0000 (0:00:00.914) 0:07:21.355 *****
2025-09-23 07:23:08.768473 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:08.768484 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:08.768494 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:08.768505 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:08.768515 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:08.768526 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:08.768537 | orchestrator | ok: [testbed-manager]
2025-09-23 07:23:08.768547 | orchestrator |
2025-09-23 07:23:08.768558 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-23 07:23:08.768569 | orchestrator | Tuesday 23 September 2025 07:22:49 +0000 (0:00:01.981) 0:07:23.337 *****
2025-09-23 07:23:08.768580 | orchestrator | ok: [testbed-manager]
2025-09-23 07:23:08.768591 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:08.768601 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:08.768612 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:08.768622 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:08.768633 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:08.768644 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:08.768654 | orchestrator |
2025-09-23 07:23:08.768700 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-09-23 07:23:08.768712 | orchestrator | Tuesday 23 September 2025 07:22:50 +0000 (0:00:01.117) 0:07:24.454 *****
2025-09-23 07:23:08.768723 | orchestrator | ok: [testbed-manager]
2025-09-23 07:23:08.768733 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:08.768744 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:08.768755 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:08.768765 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:08.768776 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:08.768787 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:08.768797 | orchestrator |
2025-09-23 07:23:08.768808 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-09-23 07:23:08.768818 | orchestrator | Tuesday 23 September 2025 07:22:51 +0000 (0:00:00.866) 0:07:25.320 *****
2025-09-23 07:23:08.768842 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-23 07:23:08.768855 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-23 07:23:08.768866 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-23 07:23:08.768877 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-23 07:23:08.768888 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-23 07:23:08.768898 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-23 07:23:08.768909 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-23 07:23:08.768919 | orchestrator |
2025-09-23 07:23:08.768930 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-09-23 07:23:08.768941 | orchestrator | Tuesday 23 September 2025 07:22:53 +0000 (0:00:01.641) 0:07:26.962 *****
2025-09-23 07:23:08.768959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:23:08.768970 | orchestrator |
2025-09-23 07:23:08.768981 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-09-23 07:23:08.768991 | orchestrator | Tuesday 23 September 2025 07:22:54 +0000 (0:00:01.004) 0:07:27.966 *****
2025-09-23 07:23:08.769002 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:08.769013 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:08.769023 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:08.769034 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:08.769044 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:08.769055 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:08.769065 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:08.769076 | orchestrator |
2025-09-23 07:23:08.769087 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-09-23 07:23:08.769098 | orchestrator | Tuesday 23 September 2025 07:23:03 +0000 (0:00:09.406) 0:07:37.373 *****
2025-09-23 07:23:08.769108 | orchestrator | ok: [testbed-manager]
2025-09-23 07:23:08.769119 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:08.769129 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:08.769140 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:08.769150 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:08.769161 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:08.769171 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:08.769182 | orchestrator |
2025-09-23 07:23:08.769192 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-23 07:23:08.769203 | orchestrator | Tuesday 23 September 2025 07:23:05 +0000 (0:00:02.022) 0:07:39.395 *****
2025-09-23 07:23:08.769213 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:08.769224 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:08.769234 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:08.769245 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:08.769255 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:08.769304 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:08.769322 | orchestrator |
2025-09-23 07:23:08.769340 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-23 07:23:08.769358 | orchestrator | Tuesday 23 September 2025 07:23:06 +0000 (0:00:01.350) 0:07:40.745 *****
2025-09-23 07:23:08.769376 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:08.769393 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:08.769411 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:08.769426 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:08.769442 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:08.769459 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:08.769475 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:08.769492 | orchestrator |
2025-09-23 07:23:08.769509 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-23 07:23:08.769527 | orchestrator |
2025-09-23 07:23:08.769545 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-23 07:23:08.769563 | orchestrator | Tuesday 23 September 2025 07:23:08 +0000 (0:00:01.271) 0:07:42.017 *****
2025-09-23 07:23:08.769581 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:23:08.769598 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:23:08.769630 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:23:08.769649 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:23:08.769668 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:23:08.769688 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:23:08.769720 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:23:36.242875 | orchestrator |
2025-09-23 07:23:36.242985 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-23 07:23:36.243000 | orchestrator |
2025-09-23 07:23:36.243012 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-23 07:23:36.243050 | orchestrator | Tuesday 23 September 2025 07:23:08 +0000 (0:00:00.542) 0:07:42.560 *****
2025-09-23 07:23:36.243062 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.243074 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.243085 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.243096 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.243106 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.243117 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.243127 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.243138 | orchestrator |
2025-09-23 07:23:36.243149 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-23 07:23:36.243159 | orchestrator | Tuesday 23 September 2025 07:23:10 +0000 (0:00:01.562) 0:07:44.123 *****
2025-09-23 07:23:36.243170 | orchestrator | ok: [testbed-manager]
2025-09-23 07:23:36.243182 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:36.243193 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:36.243203 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:36.243214 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:36.243224 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:36.243235 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:36.243246 | orchestrator |
2025-09-23 07:23:36.243305 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-23 07:23:36.243325 | orchestrator | Tuesday 23 September 2025 07:23:11 +0000 (0:00:01.492) 0:07:45.615 *****
2025-09-23 07:23:36.243343 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:23:36.243357 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:23:36.243367 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:23:36.243380 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:23:36.243392 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:23:36.243404 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:23:36.243415 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:23:36.243428 | orchestrator |
2025-09-23 07:23:36.243441 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-23 07:23:36.243454 | orchestrator | Tuesday 23 September 2025 07:23:12 +0000 (0:00:00.493) 0:07:46.108 *****
2025-09-23 07:23:36.243467 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:23:36.243481 | orchestrator |
2025-09-23 07:23:36.243493 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-23 07:23:36.243505 | orchestrator | Tuesday 23 September 2025 07:23:13 +0000 (0:00:00.972) 0:07:47.081 *****
2025-09-23 07:23:36.243520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:23:36.243535 | orchestrator |
2025-09-23 07:23:36.243547 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-23 07:23:36.243560 | orchestrator | Tuesday 23 September 2025 07:23:14 +0000 (0:00:00.801) 0:07:47.882 *****
2025-09-23 07:23:36.243571 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.243584 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.243602 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.243629 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.243652 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.243669 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.243686 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.243702 | orchestrator |
2025-09-23 07:23:36.243720 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-23 07:23:36.243737 | orchestrator | Tuesday 23 September 2025 07:23:22 +0000 (0:00:08.477) 0:07:56.359 *****
2025-09-23 07:23:36.243754 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.243770 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.243803 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.243820 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.243837 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.243857 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.243875 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.243894 | orchestrator |
2025-09-23 07:23:36.243906 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-23 07:23:36.243916 | orchestrator | Tuesday 23 September 2025 07:23:23 +0000 (0:00:00.838) 0:07:57.198 *****
2025-09-23 07:23:36.243927 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.243938 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.243949 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.243960 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.243970 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.243981 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.243992 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.244002 | orchestrator |
2025-09-23 07:23:36.244013 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-23 07:23:36.244023 | orchestrator | Tuesday 23 September 2025 07:23:24 +0000 (0:00:01.519) 0:07:58.718 *****
2025-09-23 07:23:36.244034 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.244045 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.244055 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.244066 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.244076 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.244086 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.244097 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.244107 | orchestrator |
2025-09-23 07:23:36.244118 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-23 07:23:36.244129 | orchestrator | Tuesday 23 September 2025 07:23:27 +0000 (0:00:02.615) 0:08:01.333 *****
2025-09-23 07:23:36.244155 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.244166 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.244177 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.244187 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.244218 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.244230 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.244240 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.244251 | orchestrator |
2025-09-23 07:23:36.244292 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-23 07:23:36.244303 | orchestrator | Tuesday 23 September 2025 07:23:28 +0000 (0:00:01.185) 0:08:02.519 *****
2025-09-23 07:23:36.244314 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.244325 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.244335 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.244346 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.244356 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.244367 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.244377 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.244388 | orchestrator |
2025-09-23 07:23:36.244398 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-23 07:23:36.244409 | orchestrator |
2025-09-23 07:23:36.244420 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-23 07:23:36.244430 | orchestrator | Tuesday 23 September 2025 07:23:30 +0000 (0:00:01.416) 0:08:03.935 *****
2025-09-23 07:23:36.244441 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:23:36.244452 | orchestrator |
2025-09-23 07:23:36.244463 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-23 07:23:36.244473 | orchestrator | Tuesday 23 September 2025 07:23:30 +0000 (0:00:00.839) 0:08:04.775 *****
2025-09-23 07:23:36.244484 | orchestrator | ok: [testbed-manager]
2025-09-23 07:23:36.244495 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:36.244514 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:36.244525 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:36.244535 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:36.244546 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:36.244556 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:36.244567 | orchestrator |
2025-09-23 07:23:36.244577 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-23 07:23:36.244588 | orchestrator | Tuesday 23 September 2025 07:23:31 +0000 (0:00:00.821) 0:08:05.596 *****
2025-09-23 07:23:36.244599 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.244609 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.244620 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.244630 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.244641 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.244651 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.244662 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.244672 | orchestrator |
2025-09-23 07:23:36.244683 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-23 07:23:36.244694 | orchestrator | Tuesday 23 September 2025 07:23:33 +0000 (0:00:01.346) 0:08:06.943 *****
2025-09-23 07:23:36.244705 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:23:36.244716 | orchestrator |
2025-09-23 07:23:36.244726 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-23 07:23:36.244737 | orchestrator | Tuesday 23 September 2025 07:23:33 +0000 (0:00:00.841) 0:08:07.784 *****
2025-09-23 07:23:36.244748 | orchestrator | ok: [testbed-manager]
2025-09-23 07:23:36.244758 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:23:36.244769 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:23:36.244779 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:23:36.244790 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:23:36.244801 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:23:36.244811 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:23:36.244822 | orchestrator |
2025-09-23 07:23:36.244832 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-23 07:23:36.244843 | orchestrator | Tuesday 23 September 2025 07:23:34 +0000 (0:00:00.926) 0:08:08.711 *****
2025-09-23 07:23:36.244853 | orchestrator | changed: [testbed-manager]
2025-09-23 07:23:36.244864 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:23:36.244875 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:23:36.244885 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:23:36.244896 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:23:36.244906 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:23:36.244917 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:23:36.244927 | orchestrator |
2025-09-23 07:23:36.244938 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:23:36.244950 | orchestrator | testbed-manager : ok=164  changed=38  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2025-09-23 07:23:36.244961 | orchestrator | testbed-node-0 : ok=173  changed=67  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-23 07:23:36.244972 | orchestrator | testbed-node-1 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-23 07:23:36.244983 | orchestrator | testbed-node-2 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-23 07:23:36.244994 | orchestrator | testbed-node-3 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-23 07:23:36.245004 | orchestrator | testbed-node-4 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-23 07:23:36.245027 | orchestrator | testbed-node-5 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-23 07:23:36.245038 | orchestrator |
2025-09-23 07:23:36.245049 | orchestrator |
2025-09-23 07:23:36.245066 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:23:36.682164 | orchestrator | Tuesday 23 September 2025 07:23:36 +0000 (0:00:01.311) 0:08:10.022 *****
2025-09-23 07:23:36.682341 | orchestrator | ===============================================================================
2025-09-23 07:23:36.682365 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.63s
2025-09-23 07:23:36.682383 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.49s
2025-09-23 07:23:36.682400 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.29s
2025-09-23 07:23:36.682417 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.87s
2025-09-23 07:23:36.682433 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.03s
2025-09-23 07:23:36.682501 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.34s
2025-09-23 07:23:36.682542 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.69s
2025-09-23 07:23:36.682559 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.72s
2025-09-23 07:23:36.682576 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.41s
2025-09-23 07:23:36.682592 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.75s
2025-09-23 07:23:36.682608 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.61s
2025-09-23 07:23:36.682624 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.48s
2025-09-23 07:23:36.682641 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.32s
2025-09-23 07:23:36.682659 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.25s
2025-09-23 07:23:36.682677 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.02s
2025-09-23 07:23:36.682693 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.69s
2025-09-23 07:23:36.682710 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.78s
2025-09-23 07:23:36.682726 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.55s
2025-09-23 07:23:36.682743 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.99s
2025-09-23 07:23:36.682759 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.76s
2025-09-23 07:23:36.968979 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-23 07:23:36.969087 | orchestrator | + osism apply network
2025-09-23 07:23:49.656341 | orchestrator | 2025-09-23 07:23:49 | INFO  | Task 2053c525-e9d3-4ad6-84f9-356d75bf08f7 (network) was prepared for execution.
2025-09-23 07:23:49.656445 | orchestrator | 2025-09-23 07:23:49 | INFO  | It takes a moment until task 2053c525-e9d3-4ad6-84f9-356d75bf08f7 (network) has been started and output is visible here.
2025-09-23 07:24:17.709451 | orchestrator |
2025-09-23 07:24:17.709550 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-23 07:24:17.709565 | orchestrator |
2025-09-23 07:24:17.709577 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-23 07:24:17.709588 | orchestrator | Tuesday 23 September 2025 07:23:53 +0000 (0:00:00.273) 0:00:00.273 *****
2025-09-23 07:24:17.709599 | orchestrator | ok: [testbed-manager]
2025-09-23 07:24:17.709611 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:24:17.709621 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:24:17.709632 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:24:17.709643 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:24:17.709653 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:24:17.709664 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:24:17.709695 | orchestrator |
2025-09-23 07:24:17.709708 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-23 07:24:17.709719 | orchestrator | Tuesday 23 September 2025 07:23:54 +0000 (0:00:00.726) 0:00:01.000 *****
2025-09-23 07:24:17.709731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:24:17.709744 | orchestrator |
2025-09-23 07:24:17.709754 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-23 07:24:17.709765 | orchestrator | Tuesday 23 September 2025 07:23:55 +0000 (0:00:01.280) 0:00:02.280 *****
2025-09-23 07:24:17.709776 | orchestrator | ok: [testbed-manager]
2025-09-23 07:24:17.709786 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:24:17.709797 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:24:17.709807 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:24:17.709817 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:24:17.709828 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:24:17.709838 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:24:17.709849 | orchestrator |
2025-09-23 07:24:17.709860 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-23 07:24:17.709871 | orchestrator | Tuesday 23 September 2025 07:23:57 +0000 (0:00:01.975) 0:00:04.256 *****
2025-09-23 07:24:17.709881 | orchestrator | ok: [testbed-manager]
2025-09-23 07:24:17.709892 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:24:17.709902 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:24:17.709913 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:24:17.709923 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:24:17.709934 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:24:17.709944 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:24:17.709954 | orchestrator |
2025-09-23 07:24:17.709977 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-23 07:24:17.709989 | orchestrator | Tuesday 23 September 2025 07:23:59 +0000 (0:00:01.737) 0:00:05.994 *****
2025-09-23 07:24:17.710003 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-23 07:24:17.710015 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-23 07:24:17.710094 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-23 07:24:17.710114 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-23 07:24:17.710136 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-23 07:24:17.710155 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-23 07:24:17.710172 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-23 07:24:17.710185 | orchestrator |
2025-09-23 07:24:17.710197 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-23 07:24:17.710210 | orchestrator | Tuesday 23 September 2025 07:24:00 +0000 (0:00:00.974) 0:00:06.968 *****
2025-09-23 07:24:17.710222 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 07:24:17.710235 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-23 07:24:17.710267 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-23 07:24:17.710280 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-23 07:24:17.710292 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-23 07:24:17.710304 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-23 07:24:17.710317 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-23 07:24:17.710329 | orchestrator |
2025-09-23 07:24:17.710341 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-23 07:24:17.710352 | orchestrator | Tuesday 23 September 2025 07:24:03 +0000 (0:00:03.317) 0:00:10.286 *****
2025-09-23 07:24:17.710363 | orchestrator | changed: [testbed-manager]
2025-09-23 07:24:17.710373 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:24:17.710384 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:24:17.710394 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:24:17.710405 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:24:17.710425 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:24:17.710436 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:24:17.710446 | orchestrator |
2025-09-23 07:24:17.710457 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-23 07:24:17.710467 | orchestrator | Tuesday 23 September 2025 07:24:05 +0000 (0:00:01.646) 0:00:11.932 *****
2025-09-23 07:24:17.710478 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-23 07:24:17.710489 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 07:24:17.710499 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-23 07:24:17.710510 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-23 07:24:17.710520 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-23 07:24:17.710531 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-23 07:24:17.710541 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-23 07:24:17.710552 | orchestrator |
2025-09-23 07:24:17.710562 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-23 07:24:17.710573 | orchestrator | Tuesday 23 September 2025 07:24:07 +0000 (0:00:01.841) 0:00:13.773 *****
2025-09-23 07:24:17.710584 | orchestrator | ok: [testbed-manager]
2025-09-23 07:24:17.710594 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:24:17.710605 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:24:17.710615 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:24:17.710626 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:24:17.710636 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:24:17.710647 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:24:17.710657 | orchestrator |
2025-09-23 07:24:17.710668 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-23 07:24:17.710698 | orchestrator | Tuesday 23 September 2025 07:24:08 +0000 (0:00:01.076) 0:00:14.850 *****
2025-09-23 07:24:17.710709 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:24:17.710720 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:24:17.710730 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:24:17.710741 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:24:17.710752 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:24:17.710762 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:24:17.710773 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:24:17.710783 | orchestrator |
2025-09-23 07:24:17.710794 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-23 07:24:17.710805 | orchestrator | Tuesday 23 September 2025 07:24:09 +0000 (0:00:00.701) 0:00:15.551 *****
2025-09-23 07:24:17.710816 | orchestrator | ok: [testbed-manager]
2025-09-23 07:24:17.710827 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:24:17.710837 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:24:17.710848 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:24:17.710858 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:24:17.710869 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:24:17.710879 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:24:17.710890 | orchestrator |
2025-09-23 07:24:17.710901 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-23 07:24:17.710912 | orchestrator | Tuesday 23 September 2025 07:24:11 +0000 (0:00:02.061) 0:00:17.613 *****
2025-09-23 07:24:17.710923 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:24:17.710933 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:24:17.710944 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:24:17.710954 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:24:17.710965 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:24:17.710975 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:24:17.710986 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-23 07:24:17.710998 | orchestrator |
2025-09-23 07:24:17.711009 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-23 07:24:17.711019 | orchestrator | Tuesday 23 September 2025 07:24:12 +0000 (0:00:00.876) 0:00:18.490 *****
2025-09-23 07:24:17.711030 | orchestrator | ok: [testbed-manager]
2025-09-23 07:24:17.711047 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:24:17.711058 | orchestrator | changed: [testbed-node-0]
2025-09-23
07:24:17.711068 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:24:17.711079 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:24:17.711090 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:24:17.711101 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:24:17.711111 | orchestrator | 2025-09-23 07:24:17.711122 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-23 07:24:17.711133 | orchestrator | Tuesday 23 September 2025 07:24:13 +0000 (0:00:01.673) 0:00:20.164 ***** 2025-09-23 07:24:17.711144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:24:17.711156 | orchestrator | 2025-09-23 07:24:17.711167 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-23 07:24:17.711178 | orchestrator | Tuesday 23 September 2025 07:24:15 +0000 (0:00:01.214) 0:00:21.378 ***** 2025-09-23 07:24:17.711189 | orchestrator | ok: [testbed-manager] 2025-09-23 07:24:17.711199 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:24:17.711210 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:24:17.711221 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:24:17.711231 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:24:17.711242 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:24:17.711269 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:24:17.711280 | orchestrator | 2025-09-23 07:24:17.711290 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-23 07:24:17.711301 | orchestrator | Tuesday 23 September 2025 07:24:15 +0000 (0:00:00.888) 0:00:22.267 ***** 2025-09-23 07:24:17.711312 | orchestrator | ok: [testbed-manager] 2025-09-23 07:24:17.711322 | orchestrator | ok: [testbed-node-0] 2025-09-23 
07:24:17.711333 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:24:17.711343 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:24:17.711354 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:24:17.711364 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:24:17.711375 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:24:17.711385 | orchestrator | 2025-09-23 07:24:17.711396 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-23 07:24:17.711407 | orchestrator | Tuesday 23 September 2025 07:24:16 +0000 (0:00:00.694) 0:00:22.961 ***** 2025-09-23 07:24:17.711418 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-23 07:24:17.711428 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-23 07:24:17.711439 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-23 07:24:17.711449 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-23 07:24:17.711460 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-23 07:24:17.711471 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-23 07:24:17.711481 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-23 07:24:17.711492 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-23 07:24:17.711503 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-23 07:24:17.711514 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-23 07:24:17.711524 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-23 07:24:17.711535 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-23 07:24:17.711545 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml)
2025-09-23 07:24:17.711556 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-23 07:24:17.711567 | orchestrator |
2025-09-23 07:24:17.711585 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-23 07:24:33.466904 | orchestrator | Tuesday 23 September 2025 07:24:17 +0000 (0:00:01.055) 0:00:24.017 *****
2025-09-23 07:24:33.467033 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:24:33.467050 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:24:33.467061 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:24:33.467072 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:24:33.467083 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:24:33.467094 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:24:33.467105 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:24:33.467116 | orchestrator |
2025-09-23 07:24:33.467127 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-23 07:24:33.467138 | orchestrator | Tuesday 23 September 2025 07:24:18 +0000 (0:00:00.569) 0:00:24.587 *****
2025-09-23 07:24:33.467150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2025-09-23 07:24:33.467164 | orchestrator |
2025-09-23 07:24:33.467175 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-23 07:24:33.467186 | orchestrator | Tuesday 23 September 2025 07:24:22 +0000 (0:00:04.221) 0:00:28.808 *****
2025-09-23 07:24:33.467198 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467326 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467454 | orchestrator |
2025-09-23 07:24:33.467466 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-23 07:24:33.467479 | orchestrator | Tuesday 23 September 2025 07:24:27 +0000 (0:00:05.303) 0:00:34.111 *****
2025-09-23 07:24:33.467491 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467560 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-23 07:24:33.467619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:33.467669 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:39.711526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-23 07:24:39.711633 | orchestrator |
2025-09-23 07:24:39.711649 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-09-23 07:24:39.711662 | orchestrator | Tuesday 23 September 2025 07:24:33 +0000 (0:00:05.657) 0:00:39.769 *****
2025-09-23 07:24:39.711675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:24:39.711687 | orchestrator |
2025-09-23 07:24:39.711698 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-23 07:24:39.711709 | orchestrator | Tuesday 23 September 2025 07:24:34 +0000 (0:00:01.293) 0:00:41.063 *****
2025-09-23 07:24:39.711720 | orchestrator | ok: [testbed-manager]
2025-09-23 07:24:39.711732 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:24:39.711743 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:24:39.711754 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:24:39.711764 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:24:39.711775 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:24:39.711785 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:24:39.711796 | orchestrator |
2025-09-23 07:24:39.711807 | orchestrator | TASK [osism.commons.network : Remove
unused configuration files] ***************
2025-09-23 07:24:39.711818 | orchestrator | Tuesday 23 September 2025 07:24:35 +0000 (0:00:01.212) 0:00:42.275 *****
2025-09-23 07:24:39.711829 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-23 07:24:39.711841 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-23 07:24:39.711852 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-23 07:24:39.711880 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-23 07:24:39.711892 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-23 07:24:39.711909 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-23 07:24:39.711928 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-23 07:24:39.711975 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-23 07:24:39.711996 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:24:39.712015 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-23 07:24:39.712032 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-23 07:24:39.712043 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-23 07:24:39.712054 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-23 07:24:39.712067 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:24:39.712079 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-23 07:24:39.712091 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-23 07:24:39.712104 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-23 07:24:39.712116 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-23 07:24:39.712128 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:24:39.712142 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-23 07:24:39.712155 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-23 07:24:39.712167 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-23 07:24:39.712179 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-23 07:24:39.712192 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:24:39.712204 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-23 07:24:39.712216 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-23 07:24:39.712228 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-23 07:24:39.712271 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-23 07:24:39.712286 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:24:39.712299 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:24:39.712311 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-23 07:24:39.712323 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-23 07:24:39.712335 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-23 07:24:39.712348 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-23 07:24:39.712361 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:24:39.712373 | orchestrator |
2025-09-23 07:24:39.712386 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-09-23 07:24:39.712416 | orchestrator | Tuesday 23 September 2025 07:24:37 +0000 (0:00:02.050) 0:00:44.325 *****
2025-09-23 07:24:39.712428 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:24:39.712439 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:24:39.712449 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:24:39.712460 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:24:39.712470 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:24:39.712481 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:24:39.712492 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:24:39.712502 | orchestrator |
2025-09-23 07:24:39.712513 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-09-23 07:24:39.712524 | orchestrator | Tuesday 23 September 2025 07:24:38 +0000 (0:00:00.619) 0:00:44.945 *****
2025-09-23 07:24:39.712534 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:24:39.712551 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:24:39.712571 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:24:39.712602 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:24:39.712622 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:24:39.712641 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:24:39.712660 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:24:39.712678 | orchestrator |
2025-09-23 07:24:39.712698 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:24:39.712718 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-23 07:24:39.712741 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:24:39.712762 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:24:39.712781 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:24:39.712813 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:24:39.712833 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:24:39.712845 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:24:39.712855 | orchestrator |
2025-09-23 07:24:39.712867 | orchestrator |
2025-09-23 07:24:39.712878 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:24:39.712889 | orchestrator | Tuesday 23 September 2025 07:24:39 +0000 (0:00:00.696) 0:00:45.642 *****
2025-09-23 07:24:39.712899 | orchestrator | ===============================================================================
2025-09-23 07:24:39.712910 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.66s
2025-09-23 07:24:39.712921 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.30s
2025-09-23 07:24:39.712931 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.22s
2025-09-23 07:24:39.712942 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.32s
2025-09-23 07:24:39.712957 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.06s
2025-09-23 07:24:39.712975 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.05s
2025-09-23 07:24:39.712994 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.98s
2025-09-23 07:24:39.713012 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.84s
2025-09-23 07:24:39.713031 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s
2025-09-23 07:24:39.713052 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s
2025-09-23 07:24:39.713069 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.65s
2025-09-23 07:24:39.713087 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s
2025-09-23 07:24:39.713099 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.28s
2025-09-23 07:24:39.713109 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.21s
2025-09-23 07:24:39.713120 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s
2025-09-23 07:24:39.713130 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.08s
2025-09-23 07:24:39.713141 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.06s
2025-09-23 07:24:39.713151 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s
2025-09-23 07:24:39.713171 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.89s
2025-09-23 07:24:39.713182 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.88s
2025-09-23 07:24:40.012852 | orchestrator | + osism apply wireguard
2025-09-23 07:24:51.977030 | orchestrator | 2025-09-23 07:24:51 | INFO  | Task 1fed9019-9fa2-4ee7-af6e-f2b62eeab4dc (wireguard) was prepared for execution.
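The VXLAN items in the "Create systemd networkd netdev files" task above follow a simple full-mesh pattern: each host's `dests` list is every VXLAN endpoint in the testbed except its own `local_ip`. A minimal sketch of that pattern (a hypothetical helper for illustration, not the osism.commons.network role's actual code):

```python
# Sketch of the full-mesh peer pattern visible in the netdev items above:
# every endpoint lists all *other* endpoints as dests. Illustrative only;
# vxlan_mesh() is a made-up helper, not part of the osism collection.

def vxlan_mesh(endpoints, vni, mtu=1350):
    """Return one netdev item per endpoint, peering with all other endpoints."""
    items = {}
    for local_ip in endpoints:
        items[local_ip] = {
            "dests": sorted(ip for ip in endpoints if ip != local_ip),
            "local_ip": local_ip,
            "mtu": mtu,
            "vni": vni,
        }
    return items

endpoints = ["192.168.16.5"] + [f"192.168.16.{i}" for i in range(10, 16)]
mesh = vxlan_mesh(endpoints, vni=42)
# testbed-node-0 (192.168.16.10) peers with every endpoint except itself,
# matching its dests list in the log (lexicographic order, .5 sorting last):
assert mesh["192.168.16.10"]["dests"] == [
    "192.168.16.11", "192.168.16.12", "192.168.16.13",
    "192.168.16.14", "192.168.16.15", "192.168.16.5",
]
```

With seven endpoints, each host ends up with six peers, which is exactly the shape of the fourteen `changed:` items (two VNIs x seven hosts) logged for the netdev and network file tasks.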
2025-09-23 07:24:51.977116 | orchestrator | 2025-09-23 07:24:51 | INFO  | It takes a moment until task 1fed9019-9fa2-4ee7-af6e-f2b62eeab4dc (wireguard) has been started and output is visible here.
2025-09-23 07:25:11.800299 | orchestrator |
2025-09-23 07:25:11.800384 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-09-23 07:25:11.800392 | orchestrator |
2025-09-23 07:25:11.800396 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-09-23 07:25:11.800401 | orchestrator | Tuesday 23 September 2025 07:24:56 +0000 (0:00:00.230) 0:00:00.230 *****
2025-09-23 07:25:11.800405 | orchestrator | ok: [testbed-manager]
2025-09-23 07:25:11.800410 | orchestrator |
2025-09-23 07:25:11.800414 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-09-23 07:25:11.800418 | orchestrator | Tuesday 23 September 2025 07:24:57 +0000 (0:00:01.615) 0:00:01.845 *****
2025-09-23 07:25:11.800422 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:11.800427 | orchestrator |
2025-09-23 07:25:11.800431 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-09-23 07:25:11.800435 | orchestrator | Tuesday 23 September 2025 07:25:04 +0000 (0:00:06.742) 0:00:08.588 *****
2025-09-23 07:25:11.800439 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:11.800442 | orchestrator |
2025-09-23 07:25:11.800446 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-09-23 07:25:11.800450 | orchestrator | Tuesday 23 September 2025 07:25:05 +0000 (0:00:00.437) 0:00:09.210 *****
2025-09-23 07:25:11.800453 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:11.800457 | orchestrator |
2025-09-23 07:25:11.800461 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-09-23 07:25:11.800465 | orchestrator | Tuesday 23 September 2025 07:25:05 +0000 (0:00:00.543) 0:00:09.648 *****
2025-09-23 07:25:11.800468 | orchestrator | ok: [testbed-manager]
2025-09-23 07:25:11.800472 | orchestrator |
2025-09-23 07:25:11.800476 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-09-23 07:25:11.800479 | orchestrator | Tuesday 23 September 2025 07:25:05 +0000 (0:00:00.514) 0:00:10.191 *****
2025-09-23 07:25:11.800483 | orchestrator | ok: [testbed-manager]
2025-09-23 07:25:11.800487 | orchestrator |
2025-09-23 07:25:11.800491 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-09-23 07:25:11.800495 | orchestrator | Tuesday 23 September 2025 07:25:06 +0000 (0:00:00.381) 0:00:10.706 *****
2025-09-23 07:25:11.800498 | orchestrator | ok: [testbed-manager]
2025-09-23 07:25:11.800502 | orchestrator |
2025-09-23 07:25:11.800518 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-09-23 07:25:11.800522 | orchestrator | Tuesday 23 September 2025 07:25:06 +0000 (0:00:00.381) 0:00:11.087 *****
2025-09-23 07:25:11.800526 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:11.800530 | orchestrator |
2025-09-23 07:25:11.800534 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-09-23 07:25:11.800537 | orchestrator | Tuesday 23 September 2025 07:25:07 +0000 (0:00:01.088) 0:00:12.176 *****
2025-09-23 07:25:11.800541 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-23 07:25:11.800545 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:11.800549 | orchestrator |
2025-09-23 07:25:11.800553 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-09-23 07:25:11.800556 | orchestrator | Tuesday 23 September 2025 07:25:08 +0000 (0:00:00.838) 0:00:13.014 *****
2025-09-23 07:25:11.800560 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:11.800564 | orchestrator |
2025-09-23 07:25:11.800568 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-09-23 07:25:11.800587 | orchestrator | Tuesday 23 September 2025 07:25:10 +0000 (0:00:01.547) 0:00:14.561 *****
2025-09-23 07:25:11.800591 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:11.800595 | orchestrator |
2025-09-23 07:25:11.800599 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:25:11.800603 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:25:11.800608 | orchestrator |
2025-09-23 07:25:11.800612 | orchestrator |
2025-09-23 07:25:11.800615 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:25:11.800619 | orchestrator | Tuesday 23 September 2025 07:25:11 +0000 (0:00:00.947) 0:00:15.508 *****
2025-09-23 07:25:11.800623 | orchestrator | ===============================================================================
2025-09-23 07:25:11.800626 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.74s
2025-09-23 07:25:11.800630 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.62s
2025-09-23 07:25:11.800634 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.55s
2025-09-23 07:25:11.800638 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.09s
2025-09-23 07:25:11.800642 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s
2025-09-23 07:25:11.800646 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.84s
2025-09-23 07:25:11.800649 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.62s
2025-09-23 07:25:11.800653 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s 2025-09-23 07:25:11.800657 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s 2025-09-23 07:25:11.800661 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2025-09-23 07:25:11.800664 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.38s 2025-09-23 07:25:12.094849 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-23 07:25:12.138666 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-23 07:25:12.138763 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-23 07:25:12.216675 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 193 0 --:--:-- --:--:-- --:--:-- 194 2025-09-23 07:25:12.232630 | orchestrator | + osism apply --environment custom workarounds 2025-09-23 07:25:14.203105 | orchestrator | 2025-09-23 07:25:14 | INFO  | Trying to run play workarounds in environment custom 2025-09-23 07:25:24.303703 | orchestrator | 2025-09-23 07:25:24 | INFO  | Task 1e54db73-d792-4cdc-8cf4-789cf7b95bb3 (workarounds) was prepared for execution. 2025-09-23 07:25:24.303816 | orchestrator | 2025-09-23 07:25:24 | INFO  | It takes a moment until task 1e54db73-d792-4cdc-8cf4-789cf7b95bb3 (workarounds) has been started and output is visible here. 
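[Editor's note] The wireguard key tasks logged above ("Create public and private key - server", "Create preshared key") can be approximated with standard `wg` commands wrapped in idempotent Ansible tasks. This is a minimal sketch under assumed paths and task layout, not the actual code of the `osism.services.wireguard` role:

```yaml
# Sketch only: file paths and task structure are assumptions,
# but `wg genkey`, `wg pubkey`, and `wg genpsk` are the standard
# WireGuard key-generation commands.
- name: Create public and private key - server
  ansible.builtin.shell: >
    umask 077 &&
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
  args:
    creates: /etc/wireguard/server.key

- name: Create preshared key
  ansible.builtin.shell: umask 077 && wg genpsk > /etc/wireguard/preshared.key
  args:
    creates: /etc/wireguard/preshared.key
```

The `creates:` guard is what makes a rerun report `ok:` instead of `changed:`, matching the ok/changed pattern visible in the log.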
2025-09-23 07:25:49.098500 | orchestrator |
2025-09-23 07:25:49.098579 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:25:49.098587 | orchestrator |
2025-09-23 07:25:49.098592 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-09-23 07:25:49.098597 | orchestrator | Tuesday 23 September 2025 07:25:28 +0000 (0:00:00.156) 0:00:00.156 *****
2025-09-23 07:25:49.098601 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-09-23 07:25:49.098606 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-09-23 07:25:49.098610 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-09-23 07:25:49.098614 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-09-23 07:25:49.098618 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-09-23 07:25:49.098638 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-09-23 07:25:49.098642 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-09-23 07:25:49.098646 | orchestrator |
2025-09-23 07:25:49.098650 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-09-23 07:25:49.098654 | orchestrator |
2025-09-23 07:25:49.098658 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-23 07:25:49.098662 | orchestrator | Tuesday 23 September 2025 07:25:29 +0000 (0:00:00.795) 0:00:00.951 *****
2025-09-23 07:25:49.098666 | orchestrator | ok: [testbed-manager]
2025-09-23 07:25:49.098671 | orchestrator |
2025-09-23 07:25:49.098685 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-09-23 07:25:49.098689 | orchestrator |
2025-09-23 07:25:49.098693 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-23 07:25:49.098696 | orchestrator | Tuesday 23 September 2025 07:25:31 +0000 (0:00:02.304) 0:00:03.256 *****
2025-09-23 07:25:49.098700 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:25:49.098704 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:25:49.098708 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:25:49.098712 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:25:49.098716 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:25:49.098719 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:25:49.098723 | orchestrator |
2025-09-23 07:25:49.098727 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-09-23 07:25:49.098731 | orchestrator |
2025-09-23 07:25:49.098734 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-09-23 07:25:49.098738 | orchestrator | Tuesday 23 September 2025 07:25:33 +0000 (0:00:01.968) 0:00:05.224 *****
2025-09-23 07:25:49.098743 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-23 07:25:49.098748 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-23 07:25:49.098752 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-23 07:25:49.098756 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-23 07:25:49.098759 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-23 07:25:49.098763 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-23 07:25:49.098767 | orchestrator |
2025-09-23 07:25:49.098770 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-09-23 07:25:49.098774 | orchestrator | Tuesday 23 September 2025 07:25:35 +0000 (0:00:01.552) 0:00:06.777 *****
2025-09-23 07:25:49.098778 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:25:49.098782 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:25:49.098786 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:25:49.098790 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:25:49.098793 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:25:49.098797 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:25:49.098801 | orchestrator |
2025-09-23 07:25:49.098804 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-09-23 07:25:49.098808 | orchestrator | Tuesday 23 September 2025 07:25:38 +0000 (0:00:03.856) 0:00:10.633 *****
2025-09-23 07:25:49.098812 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:25:49.098816 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:25:49.098819 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:25:49.098823 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:25:49.098827 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:25:49.098831 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:25:49.098834 | orchestrator |
2025-09-23 07:25:49.098838 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-09-23 07:25:49.098845 | orchestrator |
2025-09-23 07:25:49.098849 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-09-23 07:25:49.098853 | orchestrator | Tuesday 23 September 2025 07:25:39 +0000 (0:00:00.559) 0:00:11.193 *****
2025-09-23 07:25:49.098857 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:49.098860 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:25:49.098864 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:25:49.098868 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:25:49.098872 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:25:49.098876 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:25:49.098880 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:25:49.098883 | orchestrator |
2025-09-23 07:25:49.098887 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-23 07:25:49.098891 | orchestrator | Tuesday 23 September 2025 07:25:40 +0000 (0:00:01.443) 0:00:12.636 *****
2025-09-23 07:25:49.098894 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:49.098898 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:25:49.098902 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:25:49.098906 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:25:49.098909 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:25:49.098913 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:25:49.098928 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:25:49.098932 | orchestrator |
2025-09-23 07:25:49.098935 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-23 07:25:49.098939 | orchestrator | Tuesday 23 September 2025 07:25:42 +0000 (0:00:01.504) 0:00:14.140 *****
2025-09-23 07:25:49.098943 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:25:49.098947 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:25:49.098951 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:25:49.098954 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:25:49.098958 | orchestrator | ok: [testbed-manager]
2025-09-23 07:25:49.098962 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:25:49.098965 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:25:49.098969 | orchestrator |
2025-09-23 07:25:49.098973 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-23 07:25:49.098977 | orchestrator | Tuesday 23 September 2025 07:25:43 +0000 (0:00:01.548) 0:00:15.689 *****
2025-09-23 07:25:49.098980 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:25:49.098984 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:25:49.098988 | orchestrator | changed: [testbed-manager]
2025-09-23 07:25:49.098992 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:25:49.098995 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:25:49.098999 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:25:49.099003 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:25:49.099007 | orchestrator |
2025-09-23 07:25:49.099010 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-23 07:25:49.099014 | orchestrator | Tuesday 23 September 2025 07:25:45 +0000 (0:00:01.740) 0:00:17.429 *****
2025-09-23 07:25:49.099018 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:25:49.099022 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:25:49.099026 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:25:49.099029 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:25:49.099033 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:25:49.099037 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:25:49.099041 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:25:49.099044 | orchestrator |
2025-09-23 07:25:49.099048 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-23 07:25:49.099052 | orchestrator |
2025-09-23 07:25:49.099056 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-23 07:25:49.099060 | orchestrator | Tuesday 23 September 2025 07:25:46 +0000 (0:00:00.607) 0:00:18.036 *****
2025-09-23 07:25:49.099063 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:25:49.099067 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:25:49.099074 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:25:49.099081 | orchestrator | ok: [testbed-manager]
2025-09-23 07:25:49.099086 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:25:49.099090 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:25:49.099095 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:25:49.099099 | orchestrator |
2025-09-23 07:25:49.099103 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:25:49.099109 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-23 07:25:49.099114 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:25:49.099119 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:25:49.099124 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:25:49.099128 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:25:49.099132 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:25:49.099137 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:25:49.099141 | orchestrator |
2025-09-23 07:25:49.099146 | orchestrator |
2025-09-23 07:25:49.099150 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:25:49.099155 | orchestrator | Tuesday 23 September 2025 07:25:49 +0000 (0:00:02.724) 0:00:20.761 *****
2025-09-23 07:25:49.099164 | orchestrator | ===============================================================================
2025-09-23 07:25:49.099168 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.86s
2025-09-23 07:25:49.099173 | orchestrator | Install python3-docker -------------------------------------------------- 2.72s
2025-09-23 07:25:49.099178 | orchestrator | Apply netplan configuration --------------------------------------------- 2.30s
2025-09-23 07:25:49.099182 | orchestrator | Apply netplan configuration --------------------------------------------- 1.97s
2025-09-23 07:25:49.099186 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s
2025-09-23 07:25:49.099191 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s
2025-09-23 07:25:49.099195 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.55s
2025-09-23 07:25:49.099199 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.50s
2025-09-23 07:25:49.099204 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.44s
2025-09-23 07:25:49.099208 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s
2025-09-23 07:25:49.099213 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s
2025-09-23 07:25:49.099220 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.56s
2025-09-23 07:25:49.748332 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-23 07:26:01.789722 | orchestrator | 2025-09-23 07:26:01 | INFO  | Task adf4c79c-5ce4-4991-b2da-dd36a1c4543e (reboot) was prepared for execution.
2025-09-23 07:26:01.789833 | orchestrator | 2025-09-23 07:26:01 | INFO  | It takes a moment until task adf4c79c-5ce4-4991-b2da-dd36a1c4543e (reboot) has been started and output is visible here.
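[Editor's note] The reboot play logged below guards against accidental reboots with the `-e ireallymeanit=yes` extra variable seen in the command line above. A minimal sketch of that guard pattern (the task name is taken from the log; the variable handling and message are assumptions about how such a guard is typically written):

```yaml
# Sketch of a confirmation-guard task: abort the play unless the
# operator explicitly passed -e ireallymeanit=yes on the command line.
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "Re-run the play with -e ireallymeanit=yes to confirm the reboot."
  when: ireallymeanit | default('no') != 'yes'
```

Because the variable was supplied here, this task shows up as `skipping:` on every node and the reboot tasks run.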
2025-09-23 07:26:11.952721 | orchestrator |
2025-09-23 07:26:11.952846 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-23 07:26:11.952874 | orchestrator |
2025-09-23 07:26:11.952920 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-23 07:26:11.952938 | orchestrator | Tuesday 23 September 2025 07:26:05 +0000 (0:00:00.216) 0:00:00.216 *****
2025-09-23 07:26:11.952957 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:26:11.952976 | orchestrator |
2025-09-23 07:26:11.952995 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-23 07:26:11.953014 | orchestrator | Tuesday 23 September 2025 07:26:06 +0000 (0:00:00.113) 0:00:00.330 *****
2025-09-23 07:26:11.953032 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:26:11.953049 | orchestrator |
2025-09-23 07:26:11.953060 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-23 07:26:11.953087 | orchestrator | Tuesday 23 September 2025 07:26:06 +0000 (0:00:00.954) 0:00:01.284 *****
2025-09-23 07:26:11.953098 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:26:11.953108 | orchestrator |
2025-09-23 07:26:11.953119 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-23 07:26:11.953129 | orchestrator |
2025-09-23 07:26:11.953140 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-23 07:26:11.953150 | orchestrator | Tuesday 23 September 2025 07:26:07 +0000 (0:00:00.128) 0:00:01.413 *****
2025-09-23 07:26:11.953161 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:26:11.953172 | orchestrator |
2025-09-23 07:26:11.953182 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-23 07:26:11.953193 | orchestrator | Tuesday 23 September 2025 07:26:07 +0000 (0:00:00.115) 0:00:01.528 *****
2025-09-23 07:26:11.953203 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:26:11.953214 | orchestrator |
2025-09-23 07:26:11.953274 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-23 07:26:11.953287 | orchestrator | Tuesday 23 September 2025 07:26:07 +0000 (0:00:00.674) 0:00:02.203 *****
2025-09-23 07:26:11.953299 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:26:11.953311 | orchestrator |
2025-09-23 07:26:11.953323 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-23 07:26:11.953335 | orchestrator |
2025-09-23 07:26:11.953347 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-23 07:26:11.953358 | orchestrator | Tuesday 23 September 2025 07:26:08 +0000 (0:00:00.122) 0:00:02.326 *****
2025-09-23 07:26:11.953370 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:26:11.953382 | orchestrator |
2025-09-23 07:26:11.953394 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-23 07:26:11.953405 | orchestrator | Tuesday 23 September 2025 07:26:08 +0000 (0:00:00.227) 0:00:02.554 *****
2025-09-23 07:26:11.953417 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:26:11.953429 | orchestrator |
2025-09-23 07:26:11.953441 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-23 07:26:11.953454 | orchestrator | Tuesday 23 September 2025 07:26:08 +0000 (0:00:00.654) 0:00:03.208 *****
2025-09-23 07:26:11.953466 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:26:11.953478 | orchestrator |
2025-09-23 07:26:11.953490 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-23 07:26:11.953502 | orchestrator |
2025-09-23 07:26:11.953514 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-23 07:26:11.953526 | orchestrator | Tuesday 23 September 2025 07:26:09 +0000 (0:00:00.113) 0:00:03.322 *****
2025-09-23 07:26:11.953538 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:26:11.953550 | orchestrator |
2025-09-23 07:26:11.953562 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-23 07:26:11.953574 | orchestrator | Tuesday 23 September 2025 07:26:09 +0000 (0:00:00.101) 0:00:03.424 *****
2025-09-23 07:26:11.953586 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:26:11.953598 | orchestrator |
2025-09-23 07:26:11.953610 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-23 07:26:11.953622 | orchestrator | Tuesday 23 September 2025 07:26:09 +0000 (0:00:00.669) 0:00:04.094 *****
2025-09-23 07:26:11.953644 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:26:11.953656 | orchestrator |
2025-09-23 07:26:11.953668 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-23 07:26:11.953680 | orchestrator |
2025-09-23 07:26:11.953691 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-23 07:26:11.953701 | orchestrator | Tuesday 23 September 2025 07:26:09 +0000 (0:00:00.118) 0:00:04.213 *****
2025-09-23 07:26:11.953712 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:26:11.953722 | orchestrator |
2025-09-23 07:26:11.953733 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-23 07:26:11.953743 | orchestrator | Tuesday 23 September 2025 07:26:10 +0000 (0:00:00.117) 0:00:04.330 *****
2025-09-23 07:26:11.953754 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:26:11.953764 | orchestrator |
2025-09-23 07:26:11.953775 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-23 07:26:11.953785 | orchestrator | Tuesday 23 September 2025 07:26:10 +0000 (0:00:00.650) 0:00:04.981 *****
2025-09-23 07:26:11.953796 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:26:11.953806 | orchestrator |
2025-09-23 07:26:11.953816 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-23 07:26:11.953827 | orchestrator |
2025-09-23 07:26:11.953837 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-23 07:26:11.953848 | orchestrator | Tuesday 23 September 2025 07:26:10 +0000 (0:00:00.109) 0:00:05.090 *****
2025-09-23 07:26:11.953858 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:26:11.953869 | orchestrator |
2025-09-23 07:26:11.953879 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-23 07:26:11.953890 | orchestrator | Tuesday 23 September 2025 07:26:10 +0000 (0:00:00.104) 0:00:05.194 *****
2025-09-23 07:26:11.953900 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:26:11.953910 | orchestrator |
2025-09-23 07:26:11.953921 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-23 07:26:11.953932 | orchestrator | Tuesday 23 September 2025 07:26:11 +0000 (0:00:00.704) 0:00:05.899 *****
2025-09-23 07:26:11.953961 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:26:11.953973 | orchestrator |
2025-09-23 07:26:11.953983 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:26:11.953995 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:26:11.954007 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:26:11.954071 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:26:11.954091 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:26:11.954102 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:26:11.954113 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:26:11.954124 | orchestrator |
2025-09-23 07:26:11.954134 | orchestrator |
2025-09-23 07:26:11.954179 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:26:11.954191 | orchestrator | Tuesday 23 September 2025 07:26:11 +0000 (0:00:00.036) 0:00:05.935 *****
2025-09-23 07:26:11.954202 | orchestrator | ===============================================================================
2025-09-23 07:26:11.954213 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s
2025-09-23 07:26:11.954249 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s
2025-09-23 07:26:11.954268 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s
2025-09-23 07:26:12.245983 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-23 07:26:24.275695 | orchestrator | 2025-09-23 07:26:24 | INFO  | Task efbd253c-1b52-4e09-93a2-ed90d6d2852e (wait-for-connection) was prepared for execution.
2025-09-23 07:26:24.275800 | orchestrator | 2025-09-23 07:26:24 | INFO  | It takes a moment until task efbd253c-1b52-4e09-93a2-ed90d6d2852e (wait-for-connection) has been started and output is visible here.
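[Editor's note] After the fire-and-forget reboot above, the `wait-for-connection` play below blocks until SSH is back on every node. This is the standard `ansible.builtin.wait_for_connection` pattern; a minimal sketch (the `delay`/`timeout` values are assumptions, not the playbook's actual settings):

```yaml
# Sketch: poll the rebooted hosts until Ansible can connect again.
- name: Wait until remote system is reachable
  ansible.builtin.wait_for_connection:
    delay: 5        # give the host a moment to actually go down first
    timeout: 600    # fail if SSH is not back within 10 minutes
```

The ~11.6s runtime in the TASKS RECAP below is simply how long the slowest node took to come back.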
2025-09-23 07:26:40.214135 | orchestrator |
2025-09-23 07:26:40.214343 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-09-23 07:26:40.214372 | orchestrator |
2025-09-23 07:26:40.214392 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-09-23 07:26:40.214413 | orchestrator | Tuesday 23 September 2025 07:26:28 +0000 (0:00:00.239) 0:00:00.239 *****
2025-09-23 07:26:40.214433 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:26:40.214456 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:26:40.214476 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:26:40.214497 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:26:40.214516 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:26:40.214535 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:26:40.214554 | orchestrator |
2025-09-23 07:26:40.214574 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:26:40.214595 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:26:40.214619 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:26:40.214640 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:26:40.214660 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:26:40.214681 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:26:40.214704 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:26:40.214726 | orchestrator |
2025-09-23 07:26:40.214749 | orchestrator |
2025-09-23 07:26:40.214768 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:26:40.214789 | orchestrator | Tuesday 23 September 2025 07:26:39 +0000 (0:00:11.569) 0:00:11.808 *****
2025-09-23 07:26:40.214808 | orchestrator | ===============================================================================
2025-09-23 07:26:40.214828 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s
2025-09-23 07:26:40.525041 | orchestrator | + osism apply hddtemp
2025-09-23 07:26:52.591267 | orchestrator | 2025-09-23 07:26:52 | INFO  | Task 31eb951a-a7c4-4c7e-87b1-fd0f27d8759e (hddtemp) was prepared for execution.
2025-09-23 07:26:52.591381 | orchestrator | 2025-09-23 07:26:52 | INFO  | It takes a moment until task 31eb951a-a7c4-4c7e-87b1-fd0f27d8759e (hddtemp) has been started and output is visible here.
2025-09-23 07:27:20.619324 | orchestrator |
2025-09-23 07:27:20.619434 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-09-23 07:27:20.619450 | orchestrator |
2025-09-23 07:27:20.619461 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-09-23 07:27:20.619472 | orchestrator | Tuesday 23 September 2025 07:26:56 +0000 (0:00:00.272) 0:00:00.272 *****
2025-09-23 07:27:20.619482 | orchestrator | ok: [testbed-manager]
2025-09-23 07:27:20.619494 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:27:20.619504 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:27:20.619536 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:27:20.619546 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:27:20.619556 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:27:20.619565 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:27:20.619574 | orchestrator |
2025-09-23 07:27:20.619584 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-09-23 07:27:20.619594 | orchestrator | Tuesday 23 September 2025 07:26:57 +0000 (0:00:00.729) 0:00:01.001 *****
2025-09-23 07:27:20.619619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:27:20.619632 | orchestrator |
2025-09-23 07:27:20.619641 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-09-23 07:27:20.619651 | orchestrator | Tuesday 23 September 2025 07:26:58 +0000 (0:00:01.205) 0:00:02.207 *****
2025-09-23 07:27:20.619660 | orchestrator | ok: [testbed-manager]
2025-09-23 07:27:20.619669 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:27:20.619679 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:27:20.619688 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:27:20.619697 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:27:20.619706 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:27:20.619716 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:27:20.619725 | orchestrator |
2025-09-23 07:27:20.619734 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-09-23 07:27:20.619744 | orchestrator | Tuesday 23 September 2025 07:27:00 +0000 (0:00:02.061) 0:00:04.269 *****
2025-09-23 07:27:20.619753 | orchestrator | changed: [testbed-manager]
2025-09-23 07:27:20.619763 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:27:20.619773 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:27:20.619782 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:27:20.619791 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:27:20.619800 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:27:20.619809 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:27:20.619819 | orchestrator |
2025-09-23 07:27:20.619829 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-09-23 07:27:20.619840 | orchestrator | Tuesday 23 September 2025 07:27:01 +0000 (0:00:01.162) 0:00:05.432 *****
2025-09-23 07:27:20.619851 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:27:20.619861 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:27:20.619873 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:27:20.619884 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:27:20.619894 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:27:20.619904 | orchestrator | ok: [testbed-manager]
2025-09-23 07:27:20.619915 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:27:20.619926 | orchestrator |
2025-09-23 07:27:20.619936 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-09-23 07:27:20.619947 | orchestrator | Tuesday 23 September 2025 07:27:03 +0000 (0:00:01.189) 0:00:06.621 *****
2025-09-23 07:27:20.619957 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:27:20.619969 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:27:20.619980 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:27:20.619990 | orchestrator | changed: [testbed-manager]
2025-09-23 07:27:20.620001 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:27:20.620012 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:27:20.620022 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:27:20.620033 | orchestrator |
2025-09-23 07:27:20.620044 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-09-23 07:27:20.620055 | orchestrator | Tuesday 23 September 2025 07:27:03 +0000 (0:00:00.815) 0:00:07.437 *****
2025-09-23 07:27:20.620065 | orchestrator | changed: [testbed-manager]
2025-09-23 07:27:20.620076 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:27:20.620087 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:27:20.620098 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:27:20.620116 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:27:20.620127 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:27:20.620138 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:27:20.620149 | orchestrator |
2025-09-23 07:27:20.620160 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-09-23 07:27:20.620171 | orchestrator | Tuesday 23 September 2025 07:27:16 +0000 (0:00:12.538) 0:00:19.976 *****
2025-09-23 07:27:20.620182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:27:20.620193 | orchestrator |
2025-09-23 07:27:20.620229 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-09-23 07:27:20.620240 | orchestrator | Tuesday 23 September 2025 07:27:17 +0000 (0:00:01.219) 0:00:21.195 *****
2025-09-23 07:27:20.620249 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:27:20.620259 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:27:20.620268 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:27:20.620277 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:27:20.620286 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:27:20.620296 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:27:20.620305 | orchestrator | changed: [testbed-manager]
2025-09-23 07:27:20.620314 | orchestrator |
2025-09-23 07:27:20.620324 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:27:20.620333 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:27:20.620362 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-23 07:27:20.620372 | orchestrator | testbed-node-1 :
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:27:20.620382 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:27:20.620391 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:27:20.620401 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:27:20.620415 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:27:20.620425 | orchestrator | 2025-09-23 07:27:20.620435 | orchestrator | 2025-09-23 07:27:20.620444 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:27:20.620454 | orchestrator | Tuesday 23 September 2025 07:27:20 +0000 (0:00:02.609) 0:00:23.805 ***** 2025-09-23 07:27:20.620464 | orchestrator | =============================================================================== 2025-09-23 07:27:20.620473 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.54s 2025-09-23 07:27:20.620483 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.61s 2025-09-23 07:27:20.620492 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.06s 2025-09-23 07:27:20.620501 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.22s 2025-09-23 07:27:20.620511 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2025-09-23 07:27:20.620520 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.19s 2025-09-23 07:27:20.620529 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.16s 2025-09-23 07:27:20.620539 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.82s 2025-09-23 07:27:20.620555 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2025-09-23 07:27:20.899281 | orchestrator | ++ semver latest 7.1.1 2025-09-23 07:27:20.953703 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-23 07:27:20.953762 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-23 07:27:20.953769 | orchestrator | + sudo systemctl restart manager.service 2025-09-23 07:27:37.574141 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-23 07:27:37.574310 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-23 07:27:37.574329 | orchestrator | + local max_attempts=60 2025-09-23 07:27:37.574341 | orchestrator | + local name=ceph-ansible 2025-09-23 07:27:37.574353 | orchestrator | + local attempt_num=1 2025-09-23 07:27:37.574364 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:27:37.609658 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:27:37.609741 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:27:37.609756 | orchestrator | + sleep 5 2025-09-23 07:27:42.615900 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:27:42.666401 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:27:42.666496 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:27:42.666511 | orchestrator | + sleep 5 2025-09-23 07:27:47.669508 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:27:47.701480 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:27:47.701552 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:27:47.701562 | orchestrator | + sleep 5 2025-09-23 07:27:52.704774 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:27:52.738462 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:27:52.738534 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:27:52.738543 | orchestrator | + sleep 5 2025-09-23 07:27:57.742825 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:27:57.783268 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:27:57.783355 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:27:57.783369 | orchestrator | + sleep 5 2025-09-23 07:28:02.788457 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:02.825692 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:02.825793 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:28:02.825820 | orchestrator | + sleep 5 2025-09-23 07:28:07.830532 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:07.866335 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:07.866417 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:28:07.866429 | orchestrator | + sleep 5 2025-09-23 07:28:12.870993 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:12.929140 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:12.929244 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:28:12.929258 | orchestrator | + sleep 5 2025-09-23 07:28:17.932126 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:17.967096 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:17.967216 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:28:17.967236 | orchestrator | + sleep 5 2025-09-23 07:28:22.971428 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:23.008907 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-09-23 07:28:23.009010 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:28:23.009031 | orchestrator | + sleep 5 2025-09-23 07:28:28.013635 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:28.048973 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:28.049065 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:28:28.049081 | orchestrator | + sleep 5 2025-09-23 07:28:33.053022 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:33.088091 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:33.088200 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:28:33.088216 | orchestrator | + sleep 5 2025-09-23 07:28:38.093247 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:38.131165 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:38.131295 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-23 07:28:38.131310 | orchestrator | + sleep 5 2025-09-23 07:28:43.137667 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-23 07:28:43.172543 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:43.172656 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-23 07:28:43.172679 | orchestrator | + local max_attempts=60 2025-09-23 07:28:43.172697 | orchestrator | + local name=kolla-ansible 2025-09-23 07:28:43.172712 | orchestrator | + local attempt_num=1 2025-09-23 07:28:43.173817 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-23 07:28:43.212236 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:43.212339 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-23 07:28:43.212355 | orchestrator | + local max_attempts=60 2025-09-23 
07:28:43.212367 | orchestrator | + local name=osism-ansible 2025-09-23 07:28:43.212378 | orchestrator | + local attempt_num=1 2025-09-23 07:28:43.212849 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-23 07:28:43.247813 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-23 07:28:43.247910 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-23 07:28:43.247933 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-23 07:28:43.422350 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-23 07:28:43.594793 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-23 07:28:43.767289 | orchestrator | ARA in osism-ansible already disabled. 2025-09-23 07:28:43.936877 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-23 07:28:43.937085 | orchestrator | + osism apply gather-facts 2025-09-23 07:28:56.040455 | orchestrator | 2025-09-23 07:28:56 | INFO  | Task c91e7486-e1b5-4159-9c15-487a82cb0938 (gather-facts) was prepared for execution. 2025-09-23 07:28:56.040546 | orchestrator | 2025-09-23 07:28:56 | INFO  | It takes a moment until task c91e7486-e1b5-4159-9c15-487a82cb0938 (gather-facts) has been started and output is visible here. 
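The trace above shows `wait_for_container_healthy` polling `docker inspect -f '{{.State.Health.Status}}'` every five seconds until the container reports `healthy` (it passes through `unhealthy` and `starting` first). A generic sketch of that polling pattern follows; the name `wait_until_healthy` and the injectable probe command are illustrative, not the testbed's exact script, which hardcodes the `docker inspect` probe:

```shell
#!/usr/bin/env bash
# Illustrative polling helper: run a probe command until it prints "healthy",
# giving up after max_attempts. The testbed script uses
#   docker inspect -f '{{.State.Health.Status}}' NAME
# as the probe; here the probe is passed as arguments so the pattern is reusable.
wait_until_healthy() {
    local max_attempts=$1; shift
    local attempt_num=1
    until [[ "$("$@")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "probe did not report healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 1
    done
}

# Usage mirroring the log above:
#   wait_until_healthy 60 docker inspect -f '{{.State.Health.Status}}' ceph-ansible
```

Passing the probe as arguments (rather than hardcoding it) also makes the helper trivially testable with a stub command.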
2025-09-23 07:29:09.764836 | orchestrator | 2025-09-23 07:29:09.764949 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-23 07:29:09.764965 | orchestrator | 2025-09-23 07:29:09.764977 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-23 07:29:09.764990 | orchestrator | Tuesday 23 September 2025 07:28:59 +0000 (0:00:00.221) 0:00:00.221 ***** 2025-09-23 07:29:09.765002 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:29:09.765014 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:29:09.765025 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:29:09.765036 | orchestrator | ok: [testbed-manager] 2025-09-23 07:29:09.765046 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:29:09.765057 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:29:09.765068 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:29:09.765079 | orchestrator | 2025-09-23 07:29:09.765090 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-23 07:29:09.765101 | orchestrator | 2025-09-23 07:29:09.765112 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-23 07:29:09.765123 | orchestrator | Tuesday 23 September 2025 07:29:08 +0000 (0:00:08.930) 0:00:09.152 ***** 2025-09-23 07:29:09.765134 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:29:09.765147 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:29:09.765158 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:29:09.765168 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:29:09.765203 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:29:09.765214 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:29:09.765225 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:29:09.765236 | orchestrator | 2025-09-23 07:29:09.765247 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-23 07:29:09.765259 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:29:09.765271 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:29:09.765308 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:29:09.765319 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:29:09.765330 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:29:09.765342 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:29:09.765352 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:29:09.765363 | orchestrator | 2025-09-23 07:29:09.765374 | orchestrator | 2025-09-23 07:29:09.765388 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:29:09.765401 | orchestrator | Tuesday 23 September 2025 07:29:09 +0000 (0:00:00.574) 0:00:09.727 ***** 2025-09-23 07:29:09.765415 | orchestrator | =============================================================================== 2025-09-23 07:29:09.765427 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.93s 2025-09-23 07:29:09.765440 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2025-09-23 07:29:10.063440 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-23 07:29:10.082777 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-23 07:29:10.097627 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-23 07:29:10.115889 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-23 07:29:10.130591 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-23 07:29:10.144022 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-23 07:29:10.156459 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-23 07:29:10.168968 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-23 07:29:10.181433 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-23 07:29:10.200333 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-23 07:29:10.211415 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-23 07:29:10.222008 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-23 07:29:10.233653 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-23 07:29:10.245423 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-23 07:29:10.256796 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-23 07:29:10.268618 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-23 07:29:10.286312 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-23 07:29:10.298973 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-23 07:29:10.311303 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-23 07:29:10.322955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-23 07:29:10.340326 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-23 07:29:10.586216 | orchestrator | ok: Runtime: 0:23:18.094942 2025-09-23 07:29:10.686561 | 2025-09-23 07:29:10.686669 | TASK [Deploy services] 2025-09-23 07:29:11.216756 | orchestrator | skipping: Conditional result was False 2025-09-23 07:29:11.234712 | 2025-09-23 07:29:11.234890 | TASK [Deploy in a nutshell] 2025-09-23 07:29:11.935406 | orchestrator | + set -e 2025-09-23 07:29:11.936772 | orchestrator | 2025-09-23 07:29:11.936810 | orchestrator | # PULL IMAGES 2025-09-23 07:29:11.936824 | orchestrator | 2025-09-23 07:29:11.936845 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-23 07:29:11.936866 | orchestrator | ++ export INTERACTIVE=false 2025-09-23 07:29:11.936881 | orchestrator | ++ INTERACTIVE=false 2025-09-23 07:29:11.936932 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-23 07:29:11.936955 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-23 07:29:11.936969 | orchestrator | + source /opt/manager-vars.sh 2025-09-23 07:29:11.936981 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-23 07:29:11.937000 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-23 07:29:11.937011 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-23 07:29:11.937037 | orchestrator | ++ 
CEPH_VERSION=reef 2025-09-23 07:29:11.937056 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-23 07:29:11.937083 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-23 07:29:11.937103 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-23 07:29:11.937120 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-23 07:29:11.937131 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-23 07:29:11.937148 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-23 07:29:11.937159 | orchestrator | ++ export ARA=false 2025-09-23 07:29:11.937207 | orchestrator | ++ ARA=false 2025-09-23 07:29:11.937225 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-23 07:29:11.937244 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-23 07:29:11.937261 | orchestrator | ++ export TEMPEST=false 2025-09-23 07:29:11.937280 | orchestrator | ++ TEMPEST=false 2025-09-23 07:29:11.937299 | orchestrator | ++ export IS_ZUUL=true 2025-09-23 07:29:11.937315 | orchestrator | ++ IS_ZUUL=true 2025-09-23 07:29:11.937326 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-09-23 07:29:11.937338 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-09-23 07:29:11.937349 | orchestrator | ++ export EXTERNAL_API=false 2025-09-23 07:29:11.937359 | orchestrator | ++ EXTERNAL_API=false 2025-09-23 07:29:11.937370 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-23 07:29:11.937381 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-23 07:29:11.937393 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-23 07:29:11.937403 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-23 07:29:11.937414 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-23 07:29:11.937425 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-23 07:29:11.937436 | orchestrator | + echo 2025-09-23 07:29:11.937458 | orchestrator | + echo '# PULL IMAGES' 2025-09-23 07:29:11.937469 | orchestrator | + echo 2025-09-23 07:29:11.937488 | orchestrator | ++ semver latest 7.0.0 2025-09-23 
07:29:12.006322 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-23 07:29:12.006459 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-23 07:29:12.006476 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-23 07:29:13.882300 | orchestrator | 2025-09-23 07:29:13 | INFO  | Trying to run play pull-images in environment custom 2025-09-23 07:29:24.077879 | orchestrator | 2025-09-23 07:29:24 | INFO  | Task 7d0c85e2-85ff-4d46-9c8f-805f2326de46 (pull-images) was prepared for execution. 2025-09-23 07:29:24.077990 | orchestrator | 2025-09-23 07:29:24 | INFO  | Task 7d0c85e2-85ff-4d46-9c8f-805f2326de46 is running in background. No more output. Check ARA for logs. 2025-09-23 07:29:26.184094 | orchestrator | 2025-09-23 07:29:26 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-23 07:29:36.398600 | orchestrator | 2025-09-23 07:29:36 | INFO  | Task e4800906-accd-44a6-aeb4-ab1c39d31b7d (wipe-partitions) was prepared for execution. 2025-09-23 07:29:36.398716 | orchestrator | 2025-09-23 07:29:36 | INFO  | It takes a moment until task e4800906-accd-44a6-aeb4-ab1c39d31b7d (wipe-partitions) has been started and output is visible here. 
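The `++ semver latest 7.0.0` / `+ [[ -1 -ge 0 ]]` / `+ [[ latest == \l\a\t\e\s\t ]]` sequence above is a version gate: the `semver` CLI returns a comparison result, and `latest` is special-cased as always new enough. A simplified equivalent of that decision (hypothetical helper name `version_at_least`, using GNU `sort -V` in place of the `semver` binary, so this is a sketch of the logic rather than the testbed's `include.sh`):

```shell
#!/usr/bin/env bash
# Illustrative version gate: succeed when $have is "latest" or is a semantic
# version greater than or equal to $want. Uses sort -V for the comparison.
version_at_least() {
    local have=$1 want=$2
    [[ "$have" == "latest" ]] && return 0
    # If $want sorts first (or equal), $have is at least $want.
    [[ "$(printf '%s\n' "$want" "$have" | sort -V | head -n1)" == "$want" ]]
}

# Usage mirroring the gate in the trace:
#   if version_at_least "$MANAGER_VERSION" 7.0.0; then ... ; fi
```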
2025-09-23 07:29:48.853632 | orchestrator | 2025-09-23 07:29:48.853749 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-23 07:29:48.853775 | orchestrator | 2025-09-23 07:29:48.853798 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-23 07:29:48.853830 | orchestrator | Tuesday 23 September 2025 07:29:40 +0000 (0:00:00.139) 0:00:00.139 ***** 2025-09-23 07:29:48.853850 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:29:48.853869 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:29:48.853882 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:29:48.853893 | orchestrator | 2025-09-23 07:29:48.853904 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-23 07:29:48.853943 | orchestrator | Tuesday 23 September 2025 07:29:41 +0000 (0:00:00.652) 0:00:00.791 ***** 2025-09-23 07:29:48.853955 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:29:48.853965 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:29:48.853981 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:29:48.853992 | orchestrator | 2025-09-23 07:29:48.854003 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-23 07:29:48.854079 | orchestrator | Tuesday 23 September 2025 07:29:41 +0000 (0:00:00.243) 0:00:01.035 ***** 2025-09-23 07:29:48.854093 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:29:48.854104 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:29:48.854115 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:29:48.854126 | orchestrator | 2025-09-23 07:29:48.854137 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-23 07:29:48.854148 | orchestrator | Tuesday 23 September 2025 07:29:42 +0000 (0:00:00.740) 0:00:01.775 ***** 2025-09-23 07:29:48.854198 | orchestrator | skipping: 
[testbed-node-3] 2025-09-23 07:29:48.854211 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:29:48.854223 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:29:48.854235 | orchestrator | 2025-09-23 07:29:48.854247 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-23 07:29:48.854287 | orchestrator | Tuesday 23 September 2025 07:29:42 +0000 (0:00:00.270) 0:00:02.046 ***** 2025-09-23 07:29:48.854312 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-23 07:29:48.854329 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-23 07:29:48.854340 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-23 07:29:48.854351 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-23 07:29:48.854361 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-23 07:29:48.854372 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-23 07:29:48.854383 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-23 07:29:48.854393 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-23 07:29:48.854404 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-23 07:29:48.854414 | orchestrator | 2025-09-23 07:29:48.854425 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-23 07:29:48.854437 | orchestrator | Tuesday 23 September 2025 07:29:43 +0000 (0:00:01.171) 0:00:03.217 ***** 2025-09-23 07:29:48.854448 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-23 07:29:48.854459 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-23 07:29:48.854469 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-23 07:29:48.854480 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-23 07:29:48.854491 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-23 07:29:48.854501 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc)
2025-09-23 07:29:48.854512 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-09-23 07:29:48.854522 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-09-23 07:29:48.854533 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-09-23 07:29:48.854543 | orchestrator |
2025-09-23 07:29:48.854554 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-09-23 07:29:48.854565 | orchestrator | Tuesday 23 September 2025 07:29:44 +0000 (0:00:01.332) 0:00:04.550 *****
2025-09-23 07:29:48.854575 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-23 07:29:48.854586 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-23 07:29:48.854596 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-23 07:29:48.854607 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-23 07:29:48.854618 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-23 07:29:48.854628 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-23 07:29:48.854639 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-23 07:29:48.854661 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-23 07:29:48.854679 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-23 07:29:48.854690 | orchestrator |
2025-09-23 07:29:48.854701 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-09-23 07:29:48.854712 | orchestrator | Tuesday 23 September 2025 07:29:47 +0000 (0:00:02.296) 0:00:06.846 *****
2025-09-23 07:29:48.854723 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:29:48.854733 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:29:48.854744 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:29:48.854754 | orchestrator |
2025-09-23 07:29:48.854765 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-09-23 07:29:48.854777 | orchestrator | Tuesday 23 September 2025 07:29:47 +0000 (0:00:00.605) 0:00:07.451 *****
2025-09-23 07:29:48.854797 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:29:48.854818 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:29:48.854839 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:29:48.854860 | orchestrator |
2025-09-23 07:29:48.854883 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:29:48.854905 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:29:48.854922 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:29:48.854953 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:29:48.854964 | orchestrator |
2025-09-23 07:29:48.854975 | orchestrator |
2025-09-23 07:29:48.854985 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:29:48.854996 | orchestrator | Tuesday 23 September 2025 07:29:48 +0000 (0:00:00.627) 0:00:08.079 *****
2025-09-23 07:29:48.855007 | orchestrator | ===============================================================================
2025-09-23 07:29:48.855017 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.30s
2025-09-23 07:29:48.855028 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s
2025-09-23 07:29:48.855039 | orchestrator | Check device availability ----------------------------------------------- 1.17s
2025-09-23 07:29:48.855050 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.74s
2025-09-23 07:29:48.855060 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.65s
2025-09-23 07:29:48.855071 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2025-09-23 07:29:48.855081 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2025-09-23 07:29:48.855092 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s
2025-09-23 07:29:48.855103 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s
2025-09-23 07:30:01.134759 | orchestrator | 2025-09-23 07:30:01 | INFO  | Task 9b9d6622-74a1-4368-9b4a-1fffbe025b75 (facts) was prepared for execution.
2025-09-23 07:30:01.134859 | orchestrator | 2025-09-23 07:30:01 | INFO  | It takes a moment until task 9b9d6622-74a1-4368-9b4a-1fffbe025b75 (facts) has been started and output is visible here.
2025-09-23 07:30:13.136833 | orchestrator |
2025-09-23 07:30:13.136923 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-23 07:30:13.136940 | orchestrator |
2025-09-23 07:30:13.136951 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-23 07:30:13.136963 | orchestrator | Tuesday 23 September 2025 07:30:05 +0000 (0:00:00.271) 0:00:00.271 *****
2025-09-23 07:30:13.136973 | orchestrator | ok: [testbed-manager]
2025-09-23 07:30:13.136985 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:30:13.136996 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:30:13.137029 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:30:13.137040 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:30:13.137050 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:30:13.137061 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:30:13.137072 | orchestrator |
2025-09-23 07:30:13.137082 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-23 07:30:13.137093 | orchestrator | Tuesday 23 September 2025 07:30:06 +0000 (0:00:00.993) 0:00:01.265 *****
2025-09-23 07:30:13.137103 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:30:13.137114 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:30:13.137125 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:30:13.137135 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:30:13.137146 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:13.137189 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:13.137204 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:30:13.137223 | orchestrator |
2025-09-23 07:30:13.137246 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-23 07:30:13.137271 | orchestrator |
2025-09-23 07:30:13.137305 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-23 07:30:13.137322 | orchestrator | Tuesday 23 September 2025 07:30:07 +0000 (0:00:01.129) 0:00:02.395 *****
2025-09-23 07:30:13.137339 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:30:13.137357 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:30:13.137375 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:30:13.137395 | orchestrator | ok: [testbed-manager]
2025-09-23 07:30:13.137415 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:30:13.137434 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:30:13.137452 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:30:13.137464 | orchestrator |
2025-09-23 07:30:13.137478 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-23 07:30:13.137490 | orchestrator |
2025-09-23 07:30:13.137502 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-23 07:30:13.137514 | orchestrator | Tuesday 23 September 2025 07:30:12 +0000 (0:00:04.798) 0:00:07.193 *****
2025-09-23 07:30:13.137526 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:30:13.137539 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:30:13.137552 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:30:13.137563 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:30:13.137575 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:13.137587 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:13.137599 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:30:13.137610 | orchestrator |
2025-09-23 07:30:13.137623 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:30:13.137635 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:30:13.137648 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:30:13.137660 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:30:13.137672 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:30:13.137685 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:30:13.137697 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:30:13.137709 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:30:13.137722 | orchestrator |
2025-09-23 07:30:13.137744 | orchestrator |
2025-09-23 07:30:13.137756 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:30:13.137766 | orchestrator | Tuesday 23 September 2025 07:30:12 +0000 (0:00:00.606) 0:00:07.800 *****
2025-09-23 07:30:13.137777 | orchestrator | ===============================================================================
2025-09-23 07:30:13.137787 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.80s
2025-09-23 07:30:13.137797 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s
2025-09-23 07:30:13.137808 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.99s
2025-09-23 07:30:13.137819 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s
2025-09-23 07:30:15.342300 | orchestrator | 2025-09-23 07:30:15 | INFO  | Task d34e4027-8383-436d-9efc-cebf28ad0b29 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-23 07:30:15.342396 | orchestrator | 2025-09-23 07:30:15 | INFO  | It takes a moment until task d34e4027-8383-436d-9efc-cebf28ad0b29 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-23 07:30:27.583005 | orchestrator |
2025-09-23 07:30:27.583097 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-23 07:30:27.583105 | orchestrator |
2025-09-23 07:30:27.583110 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-23 07:30:27.583114 | orchestrator | Tuesday 23 September 2025 07:30:19 +0000 (0:00:00.368) 0:00:00.368 *****
2025-09-23 07:30:27.583119 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 07:30:27.583123 | orchestrator |
2025-09-23 07:30:27.583127 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-23 07:30:27.583131 | orchestrator | Tuesday 23 September 2025 07:30:19 +0000 (0:00:00.267) 0:00:00.636 *****
2025-09-23 07:30:27.583135 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:30:27.583140 | orchestrator |
2025-09-23 07:30:27.583144 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583147 | orchestrator | Tuesday 23 September 2025 07:30:20 +0000 (0:00:00.262) 0:00:00.899 *****
2025-09-23 07:30:27.583183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-23 07:30:27.583188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-23 07:30:27.583192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-23 07:30:27.583211 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-23 07:30:27.583215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-23 07:30:27.583219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-23 07:30:27.583223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-23 07:30:27.583226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-23 07:30:27.583230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-23 07:30:27.583234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-23 07:30:27.583238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-23 07:30:27.583242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-23 07:30:27.583245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-23 07:30:27.583249 | orchestrator |
2025-09-23 07:30:27.583253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583256 | orchestrator | Tuesday 23 September 2025 07:30:20 +0000 (0:00:00.358) 0:00:01.258 *****
2025-09-23 07:30:27.583260 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583279 | orchestrator |
2025-09-23 07:30:27.583283 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583287 | orchestrator | Tuesday 23 September 2025 07:30:21 +0000 (0:00:00.517) 0:00:01.775 *****
2025-09-23 07:30:27.583291 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583294 | orchestrator |
2025-09-23 07:30:27.583298 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583302 | orchestrator | Tuesday 23 September 2025 07:30:21 +0000 (0:00:00.203) 0:00:01.978 *****
2025-09-23 07:30:27.583306 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583309 | orchestrator |
2025-09-23 07:30:27.583313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583317 | orchestrator | Tuesday 23 September 2025 07:30:21 +0000 (0:00:00.214) 0:00:02.192 *****
2025-09-23 07:30:27.583321 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583327 | orchestrator |
2025-09-23 07:30:27.583331 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583335 | orchestrator | Tuesday 23 September 2025 07:30:21 +0000 (0:00:00.220) 0:00:02.413 *****
2025-09-23 07:30:27.583339 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583343 | orchestrator |
2025-09-23 07:30:27.583347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583351 | orchestrator | Tuesday 23 September 2025 07:30:21 +0000 (0:00:00.256) 0:00:02.669 *****
2025-09-23 07:30:27.583355 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583358 | orchestrator |
2025-09-23 07:30:27.583362 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583366 | orchestrator | Tuesday 23 September 2025 07:30:22 +0000 (0:00:00.208) 0:00:02.878 *****
2025-09-23 07:30:27.583370 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583373 | orchestrator |
2025-09-23 07:30:27.583377 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583381 | orchestrator | Tuesday 23 September 2025 07:30:22 +0000 (0:00:00.219) 0:00:03.097 *****
2025-09-23 07:30:27.583385 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583388 | orchestrator |
2025-09-23 07:30:27.583392 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583396 | orchestrator | Tuesday 23 September 2025 07:30:22 +0000 (0:00:00.208) 0:00:03.305 *****
2025-09-23 07:30:27.583400 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604)
2025-09-23 07:30:27.583405 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604)
2025-09-23 07:30:27.583409 | orchestrator |
2025-09-23 07:30:27.583413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583416 | orchestrator | Tuesday 23 September 2025 07:30:23 +0000 (0:00:00.405) 0:00:03.711 *****
2025-09-23 07:30:27.583431 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c90ab8a7-6741-4b53-9264-08db4b9d41dd)
2025-09-23 07:30:27.583435 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c90ab8a7-6741-4b53-9264-08db4b9d41dd)
2025-09-23 07:30:27.583439 | orchestrator |
2025-09-23 07:30:27.583443 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583446 | orchestrator | Tuesday 23 September 2025 07:30:23 +0000 (0:00:00.398) 0:00:04.109 *****
2025-09-23 07:30:27.583453 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676)
2025-09-23 07:30:27.583457 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676)
2025-09-23 07:30:27.583461 | orchestrator |
2025-09-23 07:30:27.583464 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583468 | orchestrator | Tuesday 23 September 2025 07:30:24 +0000 (0:00:00.634) 0:00:04.744 *****
2025-09-23 07:30:27.583472 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013)
2025-09-23 07:30:27.583479 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013)
2025-09-23 07:30:27.583483 | orchestrator |
2025-09-23 07:30:27.583486 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:27.583490 | orchestrator | Tuesday 23 September 2025 07:30:24 +0000 (0:00:00.648) 0:00:05.392 *****
2025-09-23 07:30:27.583494 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-23 07:30:27.583498 | orchestrator |
2025-09-23 07:30:27.583501 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583505 | orchestrator | Tuesday 23 September 2025 07:30:25 +0000 (0:00:00.764) 0:00:06.157 *****
2025-09-23 07:30:27.583509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-23 07:30:27.583513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-23 07:30:27.583516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-23 07:30:27.583520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-23 07:30:27.583524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-23 07:30:27.583527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-23 07:30:27.583531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-23 07:30:27.583535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-23 07:30:27.583539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-23 07:30:27.583542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-23 07:30:27.583546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-23 07:30:27.583550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-23 07:30:27.583554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-23 07:30:27.583557 | orchestrator |
2025-09-23 07:30:27.583561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583565 | orchestrator | Tuesday 23 September 2025 07:30:25 +0000 (0:00:00.422) 0:00:06.580 *****
2025-09-23 07:30:27.583569 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583572 | orchestrator |
2025-09-23 07:30:27.583576 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583581 | orchestrator | Tuesday 23 September 2025 07:30:26 +0000 (0:00:00.208) 0:00:06.788 *****
2025-09-23 07:30:27.583585 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583589 | orchestrator |
2025-09-23 07:30:27.583594 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583598 | orchestrator | Tuesday 23 September 2025 07:30:26 +0000 (0:00:00.231) 0:00:07.019 *****
2025-09-23 07:30:27.583602 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583606 | orchestrator |
2025-09-23 07:30:27.583610 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583615 | orchestrator | Tuesday 23 September 2025 07:30:26 +0000 (0:00:00.210) 0:00:07.230 *****
2025-09-23 07:30:27.583619 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583623 | orchestrator |
2025-09-23 07:30:27.583627 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583632 | orchestrator | Tuesday 23 September 2025 07:30:26 +0000 (0:00:00.192) 0:00:07.423 *****
2025-09-23 07:30:27.583636 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583640 | orchestrator |
2025-09-23 07:30:27.583647 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583651 | orchestrator | Tuesday 23 September 2025 07:30:26 +0000 (0:00:00.234) 0:00:07.657 *****
2025-09-23 07:30:27.583655 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583660 | orchestrator |
2025-09-23 07:30:27.583664 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583669 | orchestrator | Tuesday 23 September 2025 07:30:27 +0000 (0:00:00.188) 0:00:07.846 *****
2025-09-23 07:30:27.583673 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:27.583677 | orchestrator |
2025-09-23 07:30:27.583682 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:27.583686 | orchestrator | Tuesday 23 September 2025 07:30:27 +0000 (0:00:00.225) 0:00:08.072 *****
2025-09-23 07:30:27.583693 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.508554 | orchestrator |
2025-09-23 07:30:35.508659 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:35.508672 | orchestrator | Tuesday 23 September 2025 07:30:27 +0000 (0:00:00.197) 0:00:08.269 *****
2025-09-23 07:30:35.508680 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-23 07:30:35.508690 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-23 07:30:35.508698 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-23 07:30:35.508706 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-23 07:30:35.508714 | orchestrator |
2025-09-23 07:30:35.508722 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:35.508730 | orchestrator | Tuesday 23 September 2025 07:30:28 +0000 (0:00:01.065) 0:00:09.334 *****
2025-09-23 07:30:35.508753 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.508761 | orchestrator |
2025-09-23 07:30:35.508768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:35.508776 | orchestrator | Tuesday 23 September 2025 07:30:28 +0000 (0:00:00.206) 0:00:09.541 *****
2025-09-23 07:30:35.508784 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.508791 | orchestrator |
2025-09-23 07:30:35.508799 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:35.508807 | orchestrator | Tuesday 23 September 2025 07:30:29 +0000 (0:00:00.191) 0:00:09.733 *****
2025-09-23 07:30:35.508814 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.508822 | orchestrator |
2025-09-23 07:30:35.508829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:30:35.508837 | orchestrator | Tuesday 23 September 2025 07:30:29 +0000 (0:00:00.205) 0:00:09.939 *****
2025-09-23 07:30:35.508845 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.508852 | orchestrator |
2025-09-23 07:30:35.508860 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-23 07:30:35.508868 | orchestrator | Tuesday 23 September 2025 07:30:29 +0000 (0:00:00.202) 0:00:10.141 *****
2025-09-23 07:30:35.508875 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-23 07:30:35.508883 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-23 07:30:35.508891 | orchestrator |
2025-09-23 07:30:35.508899 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-23 07:30:35.508906 | orchestrator | Tuesday 23 September 2025 07:30:29 +0000 (0:00:00.190) 0:00:10.332 *****
2025-09-23 07:30:35.508914 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.508921 | orchestrator |
2025-09-23 07:30:35.508929 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-23 07:30:35.508937 | orchestrator | Tuesday 23 September 2025 07:30:29 +0000 (0:00:00.143) 0:00:10.476 *****
2025-09-23 07:30:35.508944 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.508952 | orchestrator |
2025-09-23 07:30:35.508959 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-23 07:30:35.508967 | orchestrator | Tuesday 23 September 2025 07:30:29 +0000 (0:00:00.159) 0:00:10.636 *****
2025-09-23 07:30:35.508975 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509001 | orchestrator |
2025-09-23 07:30:35.509009 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-23 07:30:35.509016 | orchestrator | Tuesday 23 September 2025 07:30:30 +0000 (0:00:00.179) 0:00:10.815 *****
2025-09-23 07:30:35.509024 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:30:35.509031 | orchestrator |
2025-09-23 07:30:35.509039 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-23 07:30:35.509046 | orchestrator | Tuesday 23 September 2025 07:30:30 +0000 (0:00:00.167) 0:00:10.982 *****
2025-09-23 07:30:35.509054 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa3e03eb-2d2a-5719-835a-39fedcc9009f'}})
2025-09-23 07:30:35.509062 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0570cb7e-4d0f-57ea-8b12-da850e205fc7'}})
2025-09-23 07:30:35.509070 | orchestrator |
2025-09-23 07:30:35.509077 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-23 07:30:35.509085 | orchestrator | Tuesday 23 September 2025 07:30:30 +0000 (0:00:00.182) 0:00:11.165 *****
2025-09-23 07:30:35.509093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa3e03eb-2d2a-5719-835a-39fedcc9009f'}})
2025-09-23 07:30:35.509106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0570cb7e-4d0f-57ea-8b12-da850e205fc7'}})
2025-09-23 07:30:35.509113 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509121 | orchestrator |
2025-09-23 07:30:35.509129 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-23 07:30:35.509136 | orchestrator | Tuesday 23 September 2025 07:30:30 +0000 (0:00:00.157) 0:00:11.322 *****
2025-09-23 07:30:35.509144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa3e03eb-2d2a-5719-835a-39fedcc9009f'}})
2025-09-23 07:30:35.509164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0570cb7e-4d0f-57ea-8b12-da850e205fc7'}})
2025-09-23 07:30:35.509171 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509178 | orchestrator |
2025-09-23 07:30:35.509185 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-23 07:30:35.509192 | orchestrator | Tuesday 23 September 2025 07:30:30 +0000 (0:00:00.368) 0:00:11.690 *****
2025-09-23 07:30:35.509198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa3e03eb-2d2a-5719-835a-39fedcc9009f'}})
2025-09-23 07:30:35.509205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0570cb7e-4d0f-57ea-8b12-da850e205fc7'}})
2025-09-23 07:30:35.509212 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509219 | orchestrator |
2025-09-23 07:30:35.509238 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-23 07:30:35.509245 | orchestrator | Tuesday 23 September 2025 07:30:31 +0000 (0:00:00.149) 0:00:11.840 *****
2025-09-23 07:30:35.509251 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:30:35.509258 | orchestrator |
2025-09-23 07:30:35.509265 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-23 07:30:35.509272 | orchestrator | Tuesday 23 September 2025 07:30:31 +0000 (0:00:00.149) 0:00:11.990 *****
2025-09-23 07:30:35.509278 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:30:35.509285 | orchestrator |
2025-09-23 07:30:35.509292 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-23 07:30:35.509298 | orchestrator | Tuesday 23 September 2025 07:30:31 +0000 (0:00:00.189) 0:00:12.179 *****
2025-09-23 07:30:35.509305 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509312 | orchestrator |
2025-09-23 07:30:35.509319 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-23 07:30:35.509325 | orchestrator | Tuesday 23 September 2025 07:30:31 +0000 (0:00:00.130) 0:00:12.310 *****
2025-09-23 07:30:35.509332 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509339 | orchestrator |
2025-09-23 07:30:35.509351 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-23 07:30:35.509358 | orchestrator | Tuesday 23 September 2025 07:30:31 +0000 (0:00:00.142) 0:00:12.452 *****
2025-09-23 07:30:35.509365 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509372 | orchestrator |
2025-09-23 07:30:35.509378 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-23 07:30:35.509385 | orchestrator | Tuesday 23 September 2025 07:30:31 +0000 (0:00:00.146) 0:00:12.599 *****
2025-09-23 07:30:35.509392 | orchestrator | ok: [testbed-node-3] => {
2025-09-23 07:30:35.509399 | orchestrator |     "ceph_osd_devices": {
2025-09-23 07:30:35.509406 | orchestrator |         "sdb": {
2025-09-23 07:30:35.509413 | orchestrator |             "osd_lvm_uuid": "fa3e03eb-2d2a-5719-835a-39fedcc9009f"
2025-09-23 07:30:35.509420 | orchestrator |         },
2025-09-23 07:30:35.509427 | orchestrator |         "sdc": {
2025-09-23 07:30:35.509433 | orchestrator |             "osd_lvm_uuid": "0570cb7e-4d0f-57ea-8b12-da850e205fc7"
2025-09-23 07:30:35.509440 | orchestrator |         }
2025-09-23 07:30:35.509447 | orchestrator |     }
2025-09-23 07:30:35.509454 | orchestrator | }
2025-09-23 07:30:35.509461 | orchestrator |
2025-09-23 07:30:35.509468 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-23 07:30:35.509475 | orchestrator | Tuesday 23 September 2025 07:30:32 +0000 (0:00:00.146) 0:00:12.745 *****
2025-09-23 07:30:35.509481 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509488 | orchestrator |
2025-09-23 07:30:35.509495 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-23 07:30:35.509501 | orchestrator | Tuesday 23 September 2025 07:30:32 +0000 (0:00:00.151) 0:00:12.897 *****
2025-09-23 07:30:35.509511 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509518 | orchestrator |
2025-09-23 07:30:35.509525 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-23 07:30:35.509532 | orchestrator | Tuesday 23 September 2025 07:30:32 +0000 (0:00:00.128) 0:00:13.025 *****
2025-09-23 07:30:35.509539 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:30:35.509545 | orchestrator |
2025-09-23 07:30:35.509552 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-23 07:30:35.509559 | orchestrator | Tuesday 23 September 2025 07:30:32 +0000 (0:00:00.136) 0:00:13.161 *****
2025-09-23 07:30:35.509566 | orchestrator | changed: [testbed-node-3] => {
2025-09-23 07:30:35.509572 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-23 07:30:35.509579 | orchestrator |         "ceph_osd_devices": {
2025-09-23 07:30:35.509585 | orchestrator |             "sdb": {
2025-09-23 07:30:35.509591 | orchestrator |                 "osd_lvm_uuid": "fa3e03eb-2d2a-5719-835a-39fedcc9009f"
2025-09-23 07:30:35.509597 | orchestrator |             },
2025-09-23 07:30:35.509604 | orchestrator |             "sdc": {
2025-09-23 07:30:35.509611 | orchestrator |                 "osd_lvm_uuid": "0570cb7e-4d0f-57ea-8b12-da850e205fc7"
2025-09-23 07:30:35.509618 | orchestrator |             }
2025-09-23 07:30:35.509625 | orchestrator |         },
2025-09-23 07:30:35.509632 | orchestrator |         "lvm_volumes": [
2025-09-23 07:30:35.509638 | orchestrator |             {
2025-09-23 07:30:35.509645 | orchestrator |                 "data": "osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f",
2025-09-23 07:30:35.509652 | orchestrator |                 "data_vg": "ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f"
2025-09-23 07:30:35.509659 | orchestrator |             },
2025-09-23 07:30:35.509666 | orchestrator |             {
2025-09-23 07:30:35.509672 | orchestrator |                 "data": "osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7",
2025-09-23 07:30:35.509679 | orchestrator |                 "data_vg": "ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7"
2025-09-23 07:30:35.509686 | orchestrator |             }
2025-09-23 07:30:35.509692 | orchestrator |         ]
2025-09-23 07:30:35.509699 | orchestrator |     }
2025-09-23 07:30:35.509706 | orchestrator | }
2025-09-23 07:30:35.509713 | orchestrator |
2025-09-23 07:30:35.509720 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-23 07:30:35.509731 | orchestrator | Tuesday 23 September 2025 07:30:32 +0000 (0:00:00.240) 0:00:13.401 *****
2025-09-23 07:30:35.509738 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 07:30:35.509744 | orchestrator |
2025-09-23 07:30:35.509751 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-23 07:30:35.509758 | orchestrator |
2025-09-23 07:30:35.509765 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-23 07:30:35.509771 | orchestrator | Tuesday 23 September 2025 07:30:34 +0000 (0:00:02.294) 0:00:15.696 *****
2025-09-23 07:30:35.509778 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-23 07:30:35.509785 | orchestrator |
2025-09-23 07:30:35.509792 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-23 07:30:35.509798 | orchestrator | Tuesday 23 September 2025 07:30:35 +0000 (0:00:00.230) 0:00:15.966 *****
2025-09-23 07:30:35.509805 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:30:35.509812 | orchestrator |
2025-09-23 07:30:35.509819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:35.509829 | orchestrator | Tuesday 23 September 2025 07:30:35 +0000 (0:00:00.230) 0:00:16.197 *****
2025-09-23 07:30:43.570516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-23 07:30:43.570623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-23 07:30:43.570639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-23 07:30:43.570650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-23 07:30:43.570661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-23 07:30:43.570672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-23 07:30:43.570683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-23 07:30:43.570693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-23 07:30:43.570704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-23 07:30:43.570715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-23 07:30:43.570745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-23 07:30:43.570757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-23 07:30:43.570767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-23 07:30:43.570782 | orchestrator |
2025-09-23 07:30:43.570794 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:43.570806 | orchestrator | Tuesday 23 September 2025 07:30:35 +0000 (0:00:00.462) 0:00:16.659 *****
2025-09-23 07:30:43.570818 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:43.570830 | orchestrator |
2025-09-23 07:30:43.570841 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:43.570852 | orchestrator | Tuesday 23 September 2025 07:30:36 +0000 (0:00:00.185) 0:00:16.845 *****
2025-09-23 07:30:43.570863 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:43.570874 | orchestrator |
2025-09-23 07:30:43.570884 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:43.570895 | orchestrator | Tuesday 23 September 2025 07:30:36 +0000 (0:00:00.207) 0:00:17.052 *****
2025-09-23 07:30:43.570906 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:43.570916 | orchestrator |
2025-09-23 07:30:43.570927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:43.570938 | orchestrator | Tuesday 23 September 2025 07:30:36 +0000 (0:00:00.195) 0:00:17.247 *****
2025-09-23 07:30:43.570948 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:43.570981 | orchestrator |
2025-09-23 07:30:43.570993 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:43.571003 | orchestrator | Tuesday 23 September 2025 07:30:36 +0000 (0:00:00.196) 0:00:17.443 *****
2025-09-23 07:30:43.571014 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:43.571025 | orchestrator |
2025-09-23 07:30:43.571035 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:43.571046 | orchestrator | Tuesday 23 September 2025 07:30:37 +0000 (0:00:00.604) 0:00:18.049 *****
2025-09-23 07:30:43.571059 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:43.571071 | orchestrator |
2025-09-23 07:30:43.571083 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:30:43.571096 | orchestrator | Tuesday 23 September 2025 07:30:37 +0000 (0:00:00.210) 0:00:18.259 *****
2025-09-23 07:30:43.571108 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:30:43.571120 |
orchestrator | 2025-09-23 07:30:43.571133 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:43.571145 | orchestrator | Tuesday 23 September 2025 07:30:37 +0000 (0:00:00.208) 0:00:18.468 ***** 2025-09-23 07:30:43.571182 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.571193 | orchestrator | 2025-09-23 07:30:43.571203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:43.571214 | orchestrator | Tuesday 23 September 2025 07:30:37 +0000 (0:00:00.212) 0:00:18.681 ***** 2025-09-23 07:30:43.571225 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7) 2025-09-23 07:30:43.571237 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7) 2025-09-23 07:30:43.571248 | orchestrator | 2025-09-23 07:30:43.571259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:43.571269 | orchestrator | Tuesday 23 September 2025 07:30:38 +0000 (0:00:00.416) 0:00:19.097 ***** 2025-09-23 07:30:43.571280 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec) 2025-09-23 07:30:43.571290 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec) 2025-09-23 07:30:43.571301 | orchestrator | 2025-09-23 07:30:43.571311 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:43.571322 | orchestrator | Tuesday 23 September 2025 07:30:38 +0000 (0:00:00.401) 0:00:19.499 ***** 2025-09-23 07:30:43.571332 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd) 2025-09-23 07:30:43.571343 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd) 2025-09-23 07:30:43.571361 | orchestrator | 2025-09-23 07:30:43.571379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:43.571407 | orchestrator | Tuesday 23 September 2025 07:30:39 +0000 (0:00:00.514) 0:00:20.013 ***** 2025-09-23 07:30:43.571446 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91) 2025-09-23 07:30:43.571466 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91) 2025-09-23 07:30:43.571485 | orchestrator | 2025-09-23 07:30:43.571503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:43.571521 | orchestrator | Tuesday 23 September 2025 07:30:39 +0000 (0:00:00.423) 0:00:20.436 ***** 2025-09-23 07:30:43.571539 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-23 07:30:43.571551 | orchestrator | 2025-09-23 07:30:43.571561 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.571582 | orchestrator | Tuesday 23 September 2025 07:30:40 +0000 (0:00:00.420) 0:00:20.857 ***** 2025-09-23 07:30:43.571601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-23 07:30:43.571632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-23 07:30:43.571649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-23 07:30:43.571666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-23 07:30:43.571683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-23 07:30:43.571700 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-23 07:30:43.571718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-23 07:30:43.571737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-23 07:30:43.571755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-23 07:30:43.571773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-23 07:30:43.571786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-23 07:30:43.571796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-23 07:30:43.571807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-23 07:30:43.571818 | orchestrator | 2025-09-23 07:30:43.571828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.571840 | orchestrator | Tuesday 23 September 2025 07:30:40 +0000 (0:00:00.393) 0:00:21.251 ***** 2025-09-23 07:30:43.571859 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.571887 | orchestrator | 2025-09-23 07:30:43.571906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.571923 | orchestrator | Tuesday 23 September 2025 07:30:40 +0000 (0:00:00.207) 0:00:21.458 ***** 2025-09-23 07:30:43.571940 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.571959 | orchestrator | 2025-09-23 07:30:43.571976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.571993 | orchestrator | Tuesday 23 September 2025 07:30:41 +0000 (0:00:00.680) 0:00:22.139 ***** 
2025-09-23 07:30:43.572012 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.572031 | orchestrator | 2025-09-23 07:30:43.572048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.572064 | orchestrator | Tuesday 23 September 2025 07:30:41 +0000 (0:00:00.204) 0:00:22.343 ***** 2025-09-23 07:30:43.572075 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.572085 | orchestrator | 2025-09-23 07:30:43.572096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.572107 | orchestrator | Tuesday 23 September 2025 07:30:41 +0000 (0:00:00.205) 0:00:22.549 ***** 2025-09-23 07:30:43.572118 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.572128 | orchestrator | 2025-09-23 07:30:43.572139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.572175 | orchestrator | Tuesday 23 September 2025 07:30:42 +0000 (0:00:00.209) 0:00:22.758 ***** 2025-09-23 07:30:43.572191 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.572202 | orchestrator | 2025-09-23 07:30:43.572213 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.572224 | orchestrator | Tuesday 23 September 2025 07:30:42 +0000 (0:00:00.203) 0:00:22.962 ***** 2025-09-23 07:30:43.572234 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.572245 | orchestrator | 2025-09-23 07:30:43.572255 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.572266 | orchestrator | Tuesday 23 September 2025 07:30:42 +0000 (0:00:00.198) 0:00:23.160 ***** 2025-09-23 07:30:43.572277 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.572287 | orchestrator | 2025-09-23 07:30:43.572298 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-23 07:30:43.572319 | orchestrator | Tuesday 23 September 2025 07:30:42 +0000 (0:00:00.194) 0:00:23.355 ***** 2025-09-23 07:30:43.572330 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-23 07:30:43.572342 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-23 07:30:43.572353 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-23 07:30:43.572364 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-23 07:30:43.572375 | orchestrator | 2025-09-23 07:30:43.572385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:43.572396 | orchestrator | Tuesday 23 September 2025 07:30:43 +0000 (0:00:00.711) 0:00:24.066 ***** 2025-09-23 07:30:43.572407 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:43.572418 | orchestrator | 2025-09-23 07:30:43.572440 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:49.163623 | orchestrator | Tuesday 23 September 2025 07:30:43 +0000 (0:00:00.192) 0:00:24.259 ***** 2025-09-23 07:30:49.163711 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.163726 | orchestrator | 2025-09-23 07:30:49.163738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:49.163749 | orchestrator | Tuesday 23 September 2025 07:30:43 +0000 (0:00:00.191) 0:00:24.451 ***** 2025-09-23 07:30:49.163759 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.163770 | orchestrator | 2025-09-23 07:30:49.163780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:49.163791 | orchestrator | Tuesday 23 September 2025 07:30:43 +0000 (0:00:00.197) 0:00:24.648 ***** 2025-09-23 07:30:49.163801 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.163812 | orchestrator | 2025-09-23 07:30:49.163839 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-23 07:30:49.163851 | orchestrator | Tuesday 23 September 2025 07:30:44 +0000 (0:00:00.196) 0:00:24.845 ***** 2025-09-23 07:30:49.163861 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-23 07:30:49.163871 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-23 07:30:49.163882 | orchestrator | 2025-09-23 07:30:49.163892 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-23 07:30:49.163903 | orchestrator | Tuesday 23 September 2025 07:30:44 +0000 (0:00:00.357) 0:00:25.202 ***** 2025-09-23 07:30:49.163913 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.163924 | orchestrator | 2025-09-23 07:30:49.163934 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-23 07:30:49.163945 | orchestrator | Tuesday 23 September 2025 07:30:44 +0000 (0:00:00.142) 0:00:25.344 ***** 2025-09-23 07:30:49.163955 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.163966 | orchestrator | 2025-09-23 07:30:49.163976 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-23 07:30:49.163987 | orchestrator | Tuesday 23 September 2025 07:30:44 +0000 (0:00:00.123) 0:00:25.468 ***** 2025-09-23 07:30:49.163997 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164008 | orchestrator | 2025-09-23 07:30:49.164018 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-23 07:30:49.164029 | orchestrator | Tuesday 23 September 2025 07:30:44 +0000 (0:00:00.134) 0:00:25.603 ***** 2025-09-23 07:30:49.164039 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:30:49.164051 | orchestrator | 2025-09-23 07:30:49.164061 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-23 
07:30:49.164072 | orchestrator | Tuesday 23 September 2025 07:30:45 +0000 (0:00:00.117) 0:00:25.721 ***** 2025-09-23 07:30:49.164083 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ede7e8c-1177-5738-bf30-f710eefa62dc'}}) 2025-09-23 07:30:49.164093 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6b345e42-d385-5c5d-ac31-471707d336a3'}}) 2025-09-23 07:30:49.164104 | orchestrator | 2025-09-23 07:30:49.164115 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-23 07:30:49.164145 | orchestrator | Tuesday 23 September 2025 07:30:45 +0000 (0:00:00.141) 0:00:25.862 ***** 2025-09-23 07:30:49.164192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ede7e8c-1177-5738-bf30-f710eefa62dc'}})  2025-09-23 07:30:49.164210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6b345e42-d385-5c5d-ac31-471707d336a3'}})  2025-09-23 07:30:49.164222 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164234 | orchestrator | 2025-09-23 07:30:49.164246 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-23 07:30:49.164259 | orchestrator | Tuesday 23 September 2025 07:30:45 +0000 (0:00:00.123) 0:00:25.985 ***** 2025-09-23 07:30:49.164271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ede7e8c-1177-5738-bf30-f710eefa62dc'}})  2025-09-23 07:30:49.164283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6b345e42-d385-5c5d-ac31-471707d336a3'}})  2025-09-23 07:30:49.164295 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164306 | orchestrator | 2025-09-23 07:30:49.164318 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-23 07:30:49.164331 | 
orchestrator | Tuesday 23 September 2025 07:30:45 +0000 (0:00:00.126) 0:00:26.112 ***** 2025-09-23 07:30:49.164342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ede7e8c-1177-5738-bf30-f710eefa62dc'}})  2025-09-23 07:30:49.164355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6b345e42-d385-5c5d-ac31-471707d336a3'}})  2025-09-23 07:30:49.164367 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164379 | orchestrator | 2025-09-23 07:30:49.164390 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-23 07:30:49.164403 | orchestrator | Tuesday 23 September 2025 07:30:45 +0000 (0:00:00.123) 0:00:26.235 ***** 2025-09-23 07:30:49.164415 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:30:49.164426 | orchestrator | 2025-09-23 07:30:49.164438 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-23 07:30:49.164450 | orchestrator | Tuesday 23 September 2025 07:30:45 +0000 (0:00:00.097) 0:00:26.333 ***** 2025-09-23 07:30:49.164462 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:30:49.164479 | orchestrator | 2025-09-23 07:30:49.164505 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-23 07:30:49.164528 | orchestrator | Tuesday 23 September 2025 07:30:45 +0000 (0:00:00.137) 0:00:26.470 ***** 2025-09-23 07:30:49.164546 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164564 | orchestrator | 2025-09-23 07:30:49.164600 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-23 07:30:49.164617 | orchestrator | Tuesday 23 September 2025 07:30:45 +0000 (0:00:00.132) 0:00:26.603 ***** 2025-09-23 07:30:49.164635 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164653 | orchestrator | 2025-09-23 07:30:49.164671 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-23 07:30:49.164691 | orchestrator | Tuesday 23 September 2025 07:30:46 +0000 (0:00:00.240) 0:00:26.844 ***** 2025-09-23 07:30:49.164709 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164727 | orchestrator | 2025-09-23 07:30:49.164746 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-23 07:30:49.164757 | orchestrator | Tuesday 23 September 2025 07:30:46 +0000 (0:00:00.111) 0:00:26.955 ***** 2025-09-23 07:30:49.164768 | orchestrator | ok: [testbed-node-4] => { 2025-09-23 07:30:49.164779 | orchestrator |  "ceph_osd_devices": { 2025-09-23 07:30:49.164789 | orchestrator |  "sdb": { 2025-09-23 07:30:49.164800 | orchestrator |  "osd_lvm_uuid": "7ede7e8c-1177-5738-bf30-f710eefa62dc" 2025-09-23 07:30:49.164810 | orchestrator |  }, 2025-09-23 07:30:49.164821 | orchestrator |  "sdc": { 2025-09-23 07:30:49.164842 | orchestrator |  "osd_lvm_uuid": "6b345e42-d385-5c5d-ac31-471707d336a3" 2025-09-23 07:30:49.164853 | orchestrator |  } 2025-09-23 07:30:49.164863 | orchestrator |  } 2025-09-23 07:30:49.164874 | orchestrator | } 2025-09-23 07:30:49.164885 | orchestrator | 2025-09-23 07:30:49.164895 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-23 07:30:49.164906 | orchestrator | Tuesday 23 September 2025 07:30:46 +0000 (0:00:00.115) 0:00:27.070 ***** 2025-09-23 07:30:49.164916 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164926 | orchestrator | 2025-09-23 07:30:49.164944 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-23 07:30:49.164955 | orchestrator | Tuesday 23 September 2025 07:30:46 +0000 (0:00:00.125) 0:00:27.195 ***** 2025-09-23 07:30:49.164966 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.164976 | orchestrator | 2025-09-23 07:30:49.164986 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-23 07:30:49.164997 | orchestrator | Tuesday 23 September 2025 07:30:46 +0000 (0:00:00.117) 0:00:27.313 ***** 2025-09-23 07:30:49.165007 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:30:49.165018 | orchestrator | 2025-09-23 07:30:49.165028 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-23 07:30:49.165039 | orchestrator | Tuesday 23 September 2025 07:30:46 +0000 (0:00:00.115) 0:00:27.428 ***** 2025-09-23 07:30:49.165049 | orchestrator | changed: [testbed-node-4] => { 2025-09-23 07:30:49.165060 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-23 07:30:49.165070 | orchestrator |  "ceph_osd_devices": { 2025-09-23 07:30:49.165081 | orchestrator |  "sdb": { 2025-09-23 07:30:49.165091 | orchestrator |  "osd_lvm_uuid": "7ede7e8c-1177-5738-bf30-f710eefa62dc" 2025-09-23 07:30:49.165107 | orchestrator |  }, 2025-09-23 07:30:49.165118 | orchestrator |  "sdc": { 2025-09-23 07:30:49.165128 | orchestrator |  "osd_lvm_uuid": "6b345e42-d385-5c5d-ac31-471707d336a3" 2025-09-23 07:30:49.165139 | orchestrator |  } 2025-09-23 07:30:49.165192 | orchestrator |  }, 2025-09-23 07:30:49.165207 | orchestrator |  "lvm_volumes": [ 2025-09-23 07:30:49.165218 | orchestrator |  { 2025-09-23 07:30:49.165229 | orchestrator |  "data": "osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc", 2025-09-23 07:30:49.165239 | orchestrator |  "data_vg": "ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc" 2025-09-23 07:30:49.165250 | orchestrator |  }, 2025-09-23 07:30:49.165260 | orchestrator |  { 2025-09-23 07:30:49.165270 | orchestrator |  "data": "osd-block-6b345e42-d385-5c5d-ac31-471707d336a3", 2025-09-23 07:30:49.165281 | orchestrator |  "data_vg": "ceph-6b345e42-d385-5c5d-ac31-471707d336a3" 2025-09-23 07:30:49.165291 | orchestrator |  } 2025-09-23 07:30:49.165302 | orchestrator |  ] 2025-09-23 07:30:49.165312 | orchestrator |  } 2025-09-23 07:30:49.165323 | 
orchestrator | } 2025-09-23 07:30:49.165333 | orchestrator | 2025-09-23 07:30:49.165344 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-23 07:30:49.165354 | orchestrator | Tuesday 23 September 2025 07:30:46 +0000 (0:00:00.228) 0:00:27.656 ***** 2025-09-23 07:30:49.165365 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-23 07:30:49.165375 | orchestrator | 2025-09-23 07:30:49.165386 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-23 07:30:49.165396 | orchestrator | 2025-09-23 07:30:49.165407 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-23 07:30:49.165417 | orchestrator | Tuesday 23 September 2025 07:30:47 +0000 (0:00:00.953) 0:00:28.610 ***** 2025-09-23 07:30:49.165428 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-23 07:30:49.165438 | orchestrator | 2025-09-23 07:30:49.165460 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-23 07:30:49.165481 | orchestrator | Tuesday 23 September 2025 07:30:48 +0000 (0:00:00.386) 0:00:28.996 ***** 2025-09-23 07:30:49.165503 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:30:49.165514 | orchestrator | 2025-09-23 07:30:49.165525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:49.165535 | orchestrator | Tuesday 23 September 2025 07:30:48 +0000 (0:00:00.496) 0:00:29.492 ***** 2025-09-23 07:30:49.165546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-23 07:30:49.165557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-23 07:30:49.165567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-23 
07:30:49.165578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-23 07:30:49.165588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-23 07:30:49.165599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-23 07:30:49.165618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-23 07:30:57.505435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-23 07:30:57.505536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-23 07:30:57.505552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-23 07:30:57.505564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-23 07:30:57.505575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-23 07:30:57.505586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-23 07:30:57.505597 | orchestrator | 2025-09-23 07:30:57.505609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.505621 | orchestrator | Tuesday 23 September 2025 07:30:49 +0000 (0:00:00.358) 0:00:29.851 ***** 2025-09-23 07:30:57.505632 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.505644 | orchestrator | 2025-09-23 07:30:57.505655 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.505666 | orchestrator | Tuesday 23 September 2025 07:30:49 +0000 (0:00:00.182) 0:00:30.034 ***** 2025-09-23 07:30:57.505677 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.505687 | orchestrator | 
2025-09-23 07:30:57.505698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.505709 | orchestrator | Tuesday 23 September 2025 07:30:49 +0000 (0:00:00.176) 0:00:30.210 ***** 2025-09-23 07:30:57.505719 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.505730 | orchestrator | 2025-09-23 07:30:57.505741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.505751 | orchestrator | Tuesday 23 September 2025 07:30:49 +0000 (0:00:00.184) 0:00:30.395 ***** 2025-09-23 07:30:57.505762 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.505773 | orchestrator | 2025-09-23 07:30:57.505783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.505794 | orchestrator | Tuesday 23 September 2025 07:30:49 +0000 (0:00:00.171) 0:00:30.567 ***** 2025-09-23 07:30:57.505805 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.505815 | orchestrator | 2025-09-23 07:30:57.505826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.505837 | orchestrator | Tuesday 23 September 2025 07:30:50 +0000 (0:00:00.166) 0:00:30.733 ***** 2025-09-23 07:30:57.505847 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.505858 | orchestrator | 2025-09-23 07:30:57.505868 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.505879 | orchestrator | Tuesday 23 September 2025 07:30:50 +0000 (0:00:00.181) 0:00:30.914 ***** 2025-09-23 07:30:57.505890 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.505925 | orchestrator | 2025-09-23 07:30:57.505937 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.505947 | orchestrator | Tuesday 23 September 2025 07:30:50 +0000 
(0:00:00.188) 0:00:31.103 ***** 2025-09-23 07:30:57.505958 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.505971 | orchestrator | 2025-09-23 07:30:57.506002 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.506080 | orchestrator | Tuesday 23 September 2025 07:30:50 +0000 (0:00:00.177) 0:00:31.280 ***** 2025-09-23 07:30:57.506097 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269) 2025-09-23 07:30:57.506111 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269) 2025-09-23 07:30:57.506124 | orchestrator | 2025-09-23 07:30:57.506137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.506177 | orchestrator | Tuesday 23 September 2025 07:30:51 +0000 (0:00:00.499) 0:00:31.780 ***** 2025-09-23 07:30:57.506189 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173) 2025-09-23 07:30:57.506202 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173) 2025-09-23 07:30:57.506214 | orchestrator | 2025-09-23 07:30:57.506226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.506239 | orchestrator | Tuesday 23 September 2025 07:30:52 +0000 (0:00:00.967) 0:00:32.747 ***** 2025-09-23 07:30:57.506252 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6) 2025-09-23 07:30:57.506263 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6) 2025-09-23 07:30:57.506276 | orchestrator | 2025-09-23 07:30:57.506289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.506302 | orchestrator | 
Tuesday 23 September 2025 07:30:52 +0000 (0:00:00.576) 0:00:33.324 ***** 2025-09-23 07:30:57.506314 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb) 2025-09-23 07:30:57.506326 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb) 2025-09-23 07:30:57.506337 | orchestrator | 2025-09-23 07:30:57.506347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:30:57.506358 | orchestrator | Tuesday 23 September 2025 07:30:53 +0000 (0:00:00.442) 0:00:33.767 ***** 2025-09-23 07:30:57.506369 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-23 07:30:57.506380 | orchestrator | 2025-09-23 07:30:57.506390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506401 | orchestrator | Tuesday 23 September 2025 07:30:53 +0000 (0:00:00.339) 0:00:34.106 ***** 2025-09-23 07:30:57.506430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-23 07:30:57.506441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-23 07:30:57.506452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-23 07:30:57.506462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-23 07:30:57.506473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-23 07:30:57.506483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-23 07:30:57.506494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-23 07:30:57.506504 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-23 07:30:57.506515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-23 07:30:57.506536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-23 07:30:57.506547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-23 07:30:57.506557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-23 07:30:57.506568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-23 07:30:57.506578 | orchestrator | 2025-09-23 07:30:57.506589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506599 | orchestrator | Tuesday 23 September 2025 07:30:53 +0000 (0:00:00.383) 0:00:34.490 ***** 2025-09-23 07:30:57.506610 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.506621 | orchestrator | 2025-09-23 07:30:57.506631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506642 | orchestrator | Tuesday 23 September 2025 07:30:53 +0000 (0:00:00.197) 0:00:34.687 ***** 2025-09-23 07:30:57.506653 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.506663 | orchestrator | 2025-09-23 07:30:57.506674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506684 | orchestrator | Tuesday 23 September 2025 07:30:54 +0000 (0:00:00.196) 0:00:34.883 ***** 2025-09-23 07:30:57.506695 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.506706 | orchestrator | 2025-09-23 07:30:57.506716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506727 | 
orchestrator | Tuesday 23 September 2025 07:30:54 +0000 (0:00:00.198) 0:00:35.082 ***** 2025-09-23 07:30:57.506738 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.506748 | orchestrator | 2025-09-23 07:30:57.506759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506770 | orchestrator | Tuesday 23 September 2025 07:30:54 +0000 (0:00:00.220) 0:00:35.303 ***** 2025-09-23 07:30:57.506780 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.506791 | orchestrator | 2025-09-23 07:30:57.506802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506812 | orchestrator | Tuesday 23 September 2025 07:30:54 +0000 (0:00:00.205) 0:00:35.509 ***** 2025-09-23 07:30:57.506823 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.506833 | orchestrator | 2025-09-23 07:30:57.506844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506854 | orchestrator | Tuesday 23 September 2025 07:30:55 +0000 (0:00:00.697) 0:00:36.207 ***** 2025-09-23 07:30:57.506865 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.506876 | orchestrator | 2025-09-23 07:30:57.506886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506897 | orchestrator | Tuesday 23 September 2025 07:30:55 +0000 (0:00:00.207) 0:00:36.414 ***** 2025-09-23 07:30:57.506907 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.506918 | orchestrator | 2025-09-23 07:30:57.506929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.506939 | orchestrator | Tuesday 23 September 2025 07:30:55 +0000 (0:00:00.210) 0:00:36.624 ***** 2025-09-23 07:30:57.506950 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-23 07:30:57.506961 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-09-23 07:30:57.506972 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-23 07:30:57.506983 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-23 07:30:57.506993 | orchestrator | 2025-09-23 07:30:57.507004 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.507015 | orchestrator | Tuesday 23 September 2025 07:30:56 +0000 (0:00:00.727) 0:00:37.352 ***** 2025-09-23 07:30:57.507025 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.507036 | orchestrator | 2025-09-23 07:30:57.507046 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.507063 | orchestrator | Tuesday 23 September 2025 07:30:56 +0000 (0:00:00.222) 0:00:37.575 ***** 2025-09-23 07:30:57.507074 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.507085 | orchestrator | 2025-09-23 07:30:57.507096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.507106 | orchestrator | Tuesday 23 September 2025 07:30:57 +0000 (0:00:00.212) 0:00:37.787 ***** 2025-09-23 07:30:57.507117 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.507128 | orchestrator | 2025-09-23 07:30:57.507138 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:30:57.507169 | orchestrator | Tuesday 23 September 2025 07:30:57 +0000 (0:00:00.210) 0:00:37.998 ***** 2025-09-23 07:30:57.507187 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:30:57.507198 | orchestrator | 2025-09-23 07:30:57.507209 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-23 07:30:57.507226 | orchestrator | Tuesday 23 September 2025 07:30:57 +0000 (0:00:00.194) 0:00:38.192 ***** 2025-09-23 07:31:02.217097 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-09-23 07:31:02.217204 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-23 07:31:02.217219 | orchestrator | 2025-09-23 07:31:02.217232 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-23 07:31:02.217243 | orchestrator | Tuesday 23 September 2025 07:30:57 +0000 (0:00:00.182) 0:00:38.375 ***** 2025-09-23 07:31:02.217254 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.217266 | orchestrator | 2025-09-23 07:31:02.217277 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-23 07:31:02.217288 | orchestrator | Tuesday 23 September 2025 07:30:57 +0000 (0:00:00.133) 0:00:38.508 ***** 2025-09-23 07:31:02.217298 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.217309 | orchestrator | 2025-09-23 07:31:02.217320 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-23 07:31:02.217331 | orchestrator | Tuesday 23 September 2025 07:30:57 +0000 (0:00:00.133) 0:00:38.642 ***** 2025-09-23 07:31:02.217342 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.217352 | orchestrator | 2025-09-23 07:31:02.217363 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-23 07:31:02.217374 | orchestrator | Tuesday 23 September 2025 07:30:58 +0000 (0:00:00.145) 0:00:38.787 ***** 2025-09-23 07:31:02.217385 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:31:02.217396 | orchestrator | 2025-09-23 07:31:02.217407 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-23 07:31:02.217417 | orchestrator | Tuesday 23 September 2025 07:30:58 +0000 (0:00:00.413) 0:00:39.201 ***** 2025-09-23 07:31:02.217429 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a27826e-7697-5dae-8bcf-65313ee63b58'}}) 
2025-09-23 07:31:02.217441 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b31a677e-efd4-57fc-b4ad-0e2207d5fa48'}}) 2025-09-23 07:31:02.217452 | orchestrator | 2025-09-23 07:31:02.217463 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-23 07:31:02.217474 | orchestrator | Tuesday 23 September 2025 07:30:58 +0000 (0:00:00.206) 0:00:39.407 ***** 2025-09-23 07:31:02.217485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a27826e-7697-5dae-8bcf-65313ee63b58'}})  2025-09-23 07:31:02.217496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b31a677e-efd4-57fc-b4ad-0e2207d5fa48'}})  2025-09-23 07:31:02.217507 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.217518 | orchestrator | 2025-09-23 07:31:02.217546 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-23 07:31:02.217566 | orchestrator | Tuesday 23 September 2025 07:30:58 +0000 (0:00:00.198) 0:00:39.605 ***** 2025-09-23 07:31:02.217586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a27826e-7697-5dae-8bcf-65313ee63b58'}})  2025-09-23 07:31:02.217624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b31a677e-efd4-57fc-b4ad-0e2207d5fa48'}})  2025-09-23 07:31:02.217637 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.217648 | orchestrator | 2025-09-23 07:31:02.217658 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-23 07:31:02.217669 | orchestrator | Tuesday 23 September 2025 07:30:59 +0000 (0:00:00.211) 0:00:39.817 ***** 2025-09-23 07:31:02.217680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a27826e-7697-5dae-8bcf-65313ee63b58'}})  2025-09-23 
07:31:02.217690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b31a677e-efd4-57fc-b4ad-0e2207d5fa48'}})  2025-09-23 07:31:02.217701 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.217712 | orchestrator | 2025-09-23 07:31:02.217722 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-23 07:31:02.217733 | orchestrator | Tuesday 23 September 2025 07:30:59 +0000 (0:00:00.174) 0:00:39.992 ***** 2025-09-23 07:31:02.217744 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:31:02.217754 | orchestrator | 2025-09-23 07:31:02.217765 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-23 07:31:02.217775 | orchestrator | Tuesday 23 September 2025 07:30:59 +0000 (0:00:00.168) 0:00:40.160 ***** 2025-09-23 07:31:02.217786 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:31:02.217796 | orchestrator | 2025-09-23 07:31:02.217807 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-23 07:31:02.217817 | orchestrator | Tuesday 23 September 2025 07:30:59 +0000 (0:00:00.153) 0:00:40.314 ***** 2025-09-23 07:31:02.217828 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.217838 | orchestrator | 2025-09-23 07:31:02.217849 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-23 07:31:02.217859 | orchestrator | Tuesday 23 September 2025 07:30:59 +0000 (0:00:00.205) 0:00:40.520 ***** 2025-09-23 07:31:02.217870 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.217881 | orchestrator | 2025-09-23 07:31:02.217900 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-23 07:31:02.217924 | orchestrator | Tuesday 23 September 2025 07:30:59 +0000 (0:00:00.150) 0:00:40.670 ***** 2025-09-23 07:31:02.217948 | orchestrator | skipping: [testbed-node-5] 
2025-09-23 07:31:02.217965 | orchestrator | 2025-09-23 07:31:02.217983 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-23 07:31:02.218001 | orchestrator | Tuesday 23 September 2025 07:31:00 +0000 (0:00:00.124) 0:00:40.794 ***** 2025-09-23 07:31:02.218079 | orchestrator | ok: [testbed-node-5] => { 2025-09-23 07:31:02.218103 | orchestrator |  "ceph_osd_devices": { 2025-09-23 07:31:02.218122 | orchestrator |  "sdb": { 2025-09-23 07:31:02.218141 | orchestrator |  "osd_lvm_uuid": "4a27826e-7697-5dae-8bcf-65313ee63b58" 2025-09-23 07:31:02.218207 | orchestrator |  }, 2025-09-23 07:31:02.218227 | orchestrator |  "sdc": { 2025-09-23 07:31:02.218242 | orchestrator |  "osd_lvm_uuid": "b31a677e-efd4-57fc-b4ad-0e2207d5fa48" 2025-09-23 07:31:02.218253 | orchestrator |  } 2025-09-23 07:31:02.218264 | orchestrator |  } 2025-09-23 07:31:02.218275 | orchestrator | } 2025-09-23 07:31:02.218286 | orchestrator | 2025-09-23 07:31:02.218297 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-23 07:31:02.218307 | orchestrator | Tuesday 23 September 2025 07:31:00 +0000 (0:00:00.144) 0:00:40.939 ***** 2025-09-23 07:31:02.218318 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.218329 | orchestrator | 2025-09-23 07:31:02.218340 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-23 07:31:02.218351 | orchestrator | Tuesday 23 September 2025 07:31:00 +0000 (0:00:00.140) 0:00:41.079 ***** 2025-09-23 07:31:02.218361 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:31:02.218372 | orchestrator | 2025-09-23 07:31:02.218383 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-23 07:31:02.218404 | orchestrator | Tuesday 23 September 2025 07:31:00 +0000 (0:00:00.535) 0:00:41.615 ***** 2025-09-23 07:31:02.218415 | orchestrator | skipping: [testbed-node-5] 
2025-09-23 07:31:02.218426 | orchestrator | 2025-09-23 07:31:02.218437 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-23 07:31:02.218447 | orchestrator | Tuesday 23 September 2025 07:31:01 +0000 (0:00:00.178) 0:00:41.794 ***** 2025-09-23 07:31:02.218458 | orchestrator | changed: [testbed-node-5] => { 2025-09-23 07:31:02.218472 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-23 07:31:02.218491 | orchestrator |  "ceph_osd_devices": { 2025-09-23 07:31:02.218526 | orchestrator |  "sdb": { 2025-09-23 07:31:02.218561 | orchestrator |  "osd_lvm_uuid": "4a27826e-7697-5dae-8bcf-65313ee63b58" 2025-09-23 07:31:02.218577 | orchestrator |  }, 2025-09-23 07:31:02.218588 | orchestrator |  "sdc": { 2025-09-23 07:31:02.218599 | orchestrator |  "osd_lvm_uuid": "b31a677e-efd4-57fc-b4ad-0e2207d5fa48" 2025-09-23 07:31:02.218609 | orchestrator |  } 2025-09-23 07:31:02.218620 | orchestrator |  }, 2025-09-23 07:31:02.218631 | orchestrator |  "lvm_volumes": [ 2025-09-23 07:31:02.218641 | orchestrator |  { 2025-09-23 07:31:02.218652 | orchestrator |  "data": "osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58", 2025-09-23 07:31:02.218663 | orchestrator |  "data_vg": "ceph-4a27826e-7697-5dae-8bcf-65313ee63b58" 2025-09-23 07:31:02.218674 | orchestrator |  }, 2025-09-23 07:31:02.218685 | orchestrator |  { 2025-09-23 07:31:02.218696 | orchestrator |  "data": "osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48", 2025-09-23 07:31:02.218707 | orchestrator |  "data_vg": "ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48" 2025-09-23 07:31:02.218718 | orchestrator |  } 2025-09-23 07:31:02.218728 | orchestrator |  ] 2025-09-23 07:31:02.218739 | orchestrator |  } 2025-09-23 07:31:02.218753 | orchestrator | } 2025-09-23 07:31:02.218764 | orchestrator | 2025-09-23 07:31:02.218786 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-23 07:31:02.218798 | orchestrator | Tuesday 23 September 2025 
07:31:01 +0000 (0:00:00.179) 0:00:41.973 ***** 2025-09-23 07:31:02.218816 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-23 07:31:02.218828 | orchestrator | 2025-09-23 07:31:02.218839 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:31:02.218861 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-23 07:31:02.218873 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-23 07:31:02.218884 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-23 07:31:02.218895 | orchestrator | 2025-09-23 07:31:02.218906 | orchestrator | 2025-09-23 07:31:02.218917 | orchestrator | 2025-09-23 07:31:02.218927 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:31:02.218943 | orchestrator | Tuesday 23 September 2025 07:31:02 +0000 (0:00:00.908) 0:00:42.881 ***** 2025-09-23 07:31:02.218964 | orchestrator | =============================================================================== 2025-09-23 07:31:02.218984 | orchestrator | Write configuration file ------------------------------------------------ 4.16s 2025-09-23 07:31:02.219003 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2025-09-23 07:31:02.219022 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s 2025-09-23 07:31:02.219039 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2025-09-23 07:31:02.219056 | orchestrator | Get initial list of available block devices ----------------------------- 0.99s 2025-09-23 07:31:02.219085 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2025-09-23 07:31:02.219102 | 
orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.92s 2025-09-23 07:31:02.219120 | orchestrator | Print DB devices -------------------------------------------------------- 0.78s 2025-09-23 07:31:02.219140 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-09-23 07:31:02.219211 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.73s 2025-09-23 07:31:02.219233 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-09-23 07:31:02.219253 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-09-23 07:31:02.219273 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.71s 2025-09-23 07:31:02.219296 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.70s 2025-09-23 07:31:02.219330 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-09-23 07:31:02.481694 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-09-23 07:31:02.481779 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-09-23 07:31:02.481793 | orchestrator | Print configuration data ------------------------------------------------ 0.65s 2025-09-23 07:31:02.481805 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-09-23 07:31:02.481816 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-09-23 07:31:24.886492 | orchestrator | 2025-09-23 07:31:24 | INFO  | Task 854d339a-3706-49aa-9bb1-b2042b087b4c (sync inventory) is running in background. Output coming soon. 
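The play that just finished compiles an `lvm_volumes` structure from per-device `osd_lvm_uuid` values: as the "Print configuration data" output shows, each OSD device's LV is named `osd-block-<uuid>` and lives in a VG named `ceph-<uuid>`. (The `5` in the UUIDs' version nibble suggests deterministic name-based UUIDv5 values, though the namespace and seed are not visible in this log.) A minimal Python sketch of that mapping, using the `ceph_osd_devices` data printed above for testbed-node-5:

```python
# Sketch of the "Compile lvm_volumes" step as reported in the log:
# each OSD device carries an osd_lvm_uuid, and the LV/VG names are
# derived from it ("osd-block-<uuid>" inside VG "ceph-<uuid>").
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "4a27826e-7697-5dae-8bcf-65313ee63b58"},
    "sdc": {"osd_lvm_uuid": "b31a677e-efd4-57fc-b4ad-0e2207d5fa48"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()  # dicts preserve insertion order (3.7+)
]

print(lvm_volumes)
```

This reproduces exactly the two-entry `lvm_volumes` list shown under `_ceph_configure_lvm_config_data` in the handler output above.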
2025-09-23 07:31:50.780817 | orchestrator | 2025-09-23 07:31:26 | INFO  | Starting group_vars file reorganization 2025-09-23 07:31:50.780901 | orchestrator | 2025-09-23 07:31:26 | INFO  | Moved 0 file(s) to their respective directories 2025-09-23 07:31:50.780917 | orchestrator | 2025-09-23 07:31:26 | INFO  | Group_vars file reorganization completed 2025-09-23 07:31:50.780928 | orchestrator | 2025-09-23 07:31:28 | INFO  | Starting variable preparation from inventory 2025-09-23 07:31:50.780940 | orchestrator | 2025-09-23 07:31:32 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-23 07:31:50.780952 | orchestrator | 2025-09-23 07:31:32 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-23 07:31:50.780963 | orchestrator | 2025-09-23 07:31:32 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-23 07:31:50.780975 | orchestrator | 2025-09-23 07:31:32 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-23 07:31:50.780987 | orchestrator | 2025-09-23 07:31:32 | INFO  | Variable preparation completed 2025-09-23 07:31:50.780998 | orchestrator | 2025-09-23 07:31:33 | INFO  | Starting inventory overwrite handling 2025-09-23 07:31:50.781010 | orchestrator | 2025-09-23 07:31:33 | INFO  | Handling group overwrites in 99-overwrite 2025-09-23 07:31:50.781022 | orchestrator | 2025-09-23 07:31:33 | INFO  | Removing group frr:children from 60-generic 2025-09-23 07:31:50.781033 | orchestrator | 2025-09-23 07:31:33 | INFO  | Removing group storage:children from 50-kolla 2025-09-23 07:31:50.781045 | orchestrator | 2025-09-23 07:31:33 | INFO  | Removing group netbird:children from 50-infrastructure 2025-09-23 07:31:50.781056 | orchestrator | 2025-09-23 07:31:33 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-23 07:31:50.781068 | orchestrator | 2025-09-23 07:31:33 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-23 07:31:50.781079 | orchestrator | 2025-09-23 07:31:33 | INFO  | Handling group 
overwrites in 20-roles 2025-09-23 07:31:50.781090 | orchestrator | 2025-09-23 07:31:33 | INFO  | Removing group k3s_node from 50-infrastructure 2025-09-23 07:31:50.781126 | orchestrator | 2025-09-23 07:31:33 | INFO  | Removed 6 group(s) in total 2025-09-23 07:31:50.781138 | orchestrator | 2025-09-23 07:31:33 | INFO  | Inventory overwrite handling completed 2025-09-23 07:31:50.781188 | orchestrator | 2025-09-23 07:31:34 | INFO  | Starting merge of inventory files 2025-09-23 07:31:50.781199 | orchestrator | 2025-09-23 07:31:34 | INFO  | Inventory files merged successfully 2025-09-23 07:31:50.781209 | orchestrator | 2025-09-23 07:31:39 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-23 07:31:50.781220 | orchestrator | 2025-09-23 07:31:49 | INFO  | Successfully wrote ClusterShell configuration 2025-09-23 07:31:50.781231 | orchestrator | [master 09e4c87] 2025-09-23-07-31 2025-09-23 07:31:50.781243 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-23 07:31:52.686565 | orchestrator | 2025-09-23 07:31:52 | INFO  | Task 16619847-d61a-48e4-9f38-4276336ddb40 (ceph-create-lvm-devices) was prepared for execution. 2025-09-23 07:31:52.686648 | orchestrator | 2025-09-23 07:31:52 | INFO  | It takes a moment until task 16619847-d61a-48e4-9f38-4276336ddb40 (ceph-create-lvm-devices) has been started and output is visible here. 
2025-09-23 07:32:04.553265 | orchestrator | 2025-09-23 07:32:04.553369 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-23 07:32:04.553385 | orchestrator | 2025-09-23 07:32:04.553398 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-23 07:32:04.553409 | orchestrator | Tuesday 23 September 2025 07:31:56 +0000 (0:00:00.286) 0:00:00.286 ***** 2025-09-23 07:32:04.553421 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-23 07:32:04.553432 | orchestrator | 2025-09-23 07:32:04.553444 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-23 07:32:04.553455 | orchestrator | Tuesday 23 September 2025 07:31:56 +0000 (0:00:00.213) 0:00:00.500 ***** 2025-09-23 07:32:04.553466 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:32:04.553478 | orchestrator | 2025-09-23 07:32:04.553489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.553500 | orchestrator | Tuesday 23 September 2025 07:31:57 +0000 (0:00:00.214) 0:00:00.714 ***** 2025-09-23 07:32:04.553511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-23 07:32:04.553524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-23 07:32:04.553534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-23 07:32:04.553545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-23 07:32:04.553556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-23 07:32:04.553567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-23 07:32:04.553577 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-23 07:32:04.553588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-23 07:32:04.553599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-23 07:32:04.553610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-23 07:32:04.553621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-23 07:32:04.553632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-23 07:32:04.553643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-23 07:32:04.553653 | orchestrator | 2025-09-23 07:32:04.553664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.553701 | orchestrator | Tuesday 23 September 2025 07:31:57 +0000 (0:00:00.382) 0:00:01.096 ***** 2025-09-23 07:32:04.553715 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:32:04.553728 | orchestrator | 2025-09-23 07:32:04.553741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.553771 | orchestrator | Tuesday 23 September 2025 07:31:58 +0000 (0:00:00.429) 0:00:01.525 ***** 2025-09-23 07:32:04.553784 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:32:04.553796 | orchestrator | 2025-09-23 07:32:04.553810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.553823 | orchestrator | Tuesday 23 September 2025 07:31:58 +0000 (0:00:00.169) 0:00:01.695 ***** 2025-09-23 07:32:04.553840 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:32:04.553853 | orchestrator | 2025-09-23 07:32:04.553866 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-23 07:32:04.553878 | orchestrator | Tuesday 23 September 2025 07:31:58 +0000 (0:00:00.171) 0:00:01.867 ***** 2025-09-23 07:32:04.553890 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:32:04.553903 | orchestrator | 2025-09-23 07:32:04.553916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.553929 | orchestrator | Tuesday 23 September 2025 07:31:58 +0000 (0:00:00.168) 0:00:02.035 ***** 2025-09-23 07:32:04.553941 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:32:04.553954 | orchestrator | 2025-09-23 07:32:04.553967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.553980 | orchestrator | Tuesday 23 September 2025 07:31:58 +0000 (0:00:00.173) 0:00:02.208 ***** 2025-09-23 07:32:04.553992 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:32:04.554005 | orchestrator | 2025-09-23 07:32:04.554080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.554095 | orchestrator | Tuesday 23 September 2025 07:31:58 +0000 (0:00:00.203) 0:00:02.412 ***** 2025-09-23 07:32:04.554109 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:32:04.554121 | orchestrator | 2025-09-23 07:32:04.554132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.554166 | orchestrator | Tuesday 23 September 2025 07:31:59 +0000 (0:00:00.212) 0:00:02.625 ***** 2025-09-23 07:32:04.554178 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:32:04.554189 | orchestrator | 2025-09-23 07:32:04.554200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:04.554210 | orchestrator | Tuesday 23 September 2025 07:31:59 +0000 (0:00:00.212) 0:00:02.837 ***** 2025-09-23 07:32:04.554221 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604)
2025-09-23 07:32:04.554233 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604)
2025-09-23 07:32:04.554244 | orchestrator |
2025-09-23 07:32:04.554255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:04.554265 | orchestrator | Tuesday 23 September 2025  07:31:59 +0000 (0:00:00.416) 0:00:03.254 *****
2025-09-23 07:32:04.554296 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c90ab8a7-6741-4b53-9264-08db4b9d41dd)
2025-09-23 07:32:04.554307 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c90ab8a7-6741-4b53-9264-08db4b9d41dd)
2025-09-23 07:32:04.554318 | orchestrator |
2025-09-23 07:32:04.554329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:04.554339 | orchestrator | Tuesday 23 September 2025  07:32:00 +0000 (0:00:00.471) 0:00:03.725 *****
2025-09-23 07:32:04.554350 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676)
2025-09-23 07:32:04.554360 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676)
2025-09-23 07:32:04.554371 | orchestrator |
2025-09-23 07:32:04.554382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:04.554402 | orchestrator | Tuesday 23 September 2025  07:32:00 +0000 (0:00:00.682) 0:00:04.408 *****
2025-09-23 07:32:04.554413 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013)
2025-09-23 07:32:04.554424 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013)
2025-09-23 07:32:04.554435 | orchestrator |
2025-09-23 07:32:04.554445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:04.554456 | orchestrator | Tuesday 23 September 2025  07:32:01 +0000 (0:00:00.997) 0:00:05.405 *****
2025-09-23 07:32:04.554466 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-23 07:32:04.554477 | orchestrator |
2025-09-23 07:32:04.554487 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.554498 | orchestrator | Tuesday 23 September 2025  07:32:02 +0000 (0:00:00.404) 0:00:05.810 *****
2025-09-23 07:32:04.554508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-23 07:32:04.554519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-23 07:32:04.554529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-23 07:32:04.554540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-23 07:32:04.554550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-23 07:32:04.554561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-23 07:32:04.554571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-23 07:32:04.554582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-23 07:32:04.554592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-23 07:32:04.554602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-23 07:32:04.554613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-23 07:32:04.554623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-23 07:32:04.554634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-23 07:32:04.554644 | orchestrator |
2025-09-23 07:32:04.554655 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.554666 | orchestrator | Tuesday 23 September 2025  07:32:02 +0000 (0:00:00.453) 0:00:06.264 *****
2025-09-23 07:32:04.554676 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:04.554687 | orchestrator |
2025-09-23 07:32:04.554697 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.554708 | orchestrator | Tuesday 23 September 2025  07:32:02 +0000 (0:00:00.221) 0:00:06.485 *****
2025-09-23 07:32:04.554719 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:04.554729 | orchestrator |
2025-09-23 07:32:04.554740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.554750 | orchestrator | Tuesday 23 September 2025  07:32:03 +0000 (0:00:00.204) 0:00:06.689 *****
2025-09-23 07:32:04.554761 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:04.554771 | orchestrator |
2025-09-23 07:32:04.554789 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.554809 | orchestrator | Tuesday 23 September 2025  07:32:03 +0000 (0:00:00.260) 0:00:06.950 *****
2025-09-23 07:32:04.554830 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:04.554848 | orchestrator |
2025-09-23 07:32:04.554866 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.554894 | orchestrator | Tuesday 23 September 2025  07:32:03 +0000 (0:00:00.215) 0:00:07.165 *****
2025-09-23 07:32:04.554912 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:04.554932 | orchestrator |
2025-09-23 07:32:04.554950 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.554969 | orchestrator | Tuesday 23 September 2025  07:32:03 +0000 (0:00:00.223) 0:00:07.389 *****
2025-09-23 07:32:04.554981 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:04.554992 | orchestrator |
2025-09-23 07:32:04.555002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.555013 | orchestrator | Tuesday 23 September 2025  07:32:04 +0000 (0:00:00.225) 0:00:07.614 *****
2025-09-23 07:32:04.555023 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:04.555034 | orchestrator |
2025-09-23 07:32:04.555045 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:04.555055 | orchestrator | Tuesday 23 September 2025  07:32:04 +0000 (0:00:00.234) 0:00:07.849 *****
2025-09-23 07:32:04.555076 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.390940 | orchestrator |
2025-09-23 07:32:14.391023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:14.391039 | orchestrator | Tuesday 23 September 2025  07:32:04 +0000 (0:00:00.215) 0:00:08.064 *****
2025-09-23 07:32:14.391051 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-23 07:32:14.391062 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-23 07:32:14.391073 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-23 07:32:14.391083 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-23 07:32:14.391094 | orchestrator |
2025-09-23 07:32:14.391104 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:14.391115 | orchestrator | Tuesday 23 September 2025  07:32:06 +0000 (0:00:01.468) 0:00:09.533 *****
2025-09-23 07:32:14.391126 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391136 | orchestrator |
2025-09-23 07:32:14.391200 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:14.391211 | orchestrator | Tuesday 23 September 2025  07:32:06 +0000 (0:00:00.315) 0:00:09.848 *****
2025-09-23 07:32:14.391222 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391233 | orchestrator |
2025-09-23 07:32:14.391243 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:14.391254 | orchestrator | Tuesday 23 September 2025  07:32:06 +0000 (0:00:00.262) 0:00:10.111 *****
2025-09-23 07:32:14.391276 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391287 | orchestrator |
2025-09-23 07:32:14.391298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:14.391309 | orchestrator | Tuesday 23 September 2025  07:32:06 +0000 (0:00:00.277) 0:00:10.388 *****
2025-09-23 07:32:14.391320 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391330 | orchestrator |
2025-09-23 07:32:14.391341 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-23 07:32:14.391351 | orchestrator | Tuesday 23 September 2025  07:32:07 +0000 (0:00:00.242) 0:00:10.631 *****
2025-09-23 07:32:14.391362 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391373 | orchestrator |
2025-09-23 07:32:14.391383 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-23 07:32:14.391393 | orchestrator | Tuesday 23 September 2025  07:32:07 +0000 (0:00:00.166) 0:00:10.797 *****
2025-09-23 07:32:14.391404 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa3e03eb-2d2a-5719-835a-39fedcc9009f'}})
2025-09-23 07:32:14.391415 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0570cb7e-4d0f-57ea-8b12-da850e205fc7'}})
2025-09-23 07:32:14.391426 | orchestrator |
2025-09-23 07:32:14.391437 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-23 07:32:14.391447 | orchestrator | Tuesday 23 September 2025  07:32:07 +0000 (0:00:00.202) 0:00:11.000 *****
2025-09-23 07:32:14.391482 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.391495 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.391508 | orchestrator |
2025-09-23 07:32:14.391535 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-23 07:32:14.391552 | orchestrator | Tuesday 23 September 2025  07:32:09 +0000 (0:00:02.101) 0:00:13.102 *****
2025-09-23 07:32:14.391565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.391578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.391590 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391602 | orchestrator |
2025-09-23 07:32:14.391614 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-23 07:32:14.391627 | orchestrator | Tuesday 23 September 2025  07:32:09 +0000 (0:00:00.181) 0:00:13.283 *****
2025-09-23 07:32:14.391639 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.391652 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.391664 | orchestrator |
2025-09-23 07:32:14.391676 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-23 07:32:14.391688 | orchestrator | Tuesday 23 September 2025  07:32:12 +0000 (0:00:02.461) 0:00:15.745 *****
2025-09-23 07:32:14.391700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.391713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.391725 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391738 | orchestrator |
2025-09-23 07:32:14.391751 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-23 07:32:14.391763 | orchestrator | Tuesday 23 September 2025  07:32:12 +0000 (0:00:00.167) 0:00:15.912 *****
2025-09-23 07:32:14.391776 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391788 | orchestrator |
2025-09-23 07:32:14.391800 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-23 07:32:14.391829 | orchestrator | Tuesday 23 September 2025  07:32:12 +0000 (0:00:00.170) 0:00:16.082 *****
2025-09-23 07:32:14.391841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.391854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.391865 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391876 | orchestrator |
2025-09-23 07:32:14.391886 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-23 07:32:14.391897 | orchestrator | Tuesday 23 September 2025  07:32:12 +0000 (0:00:00.285) 0:00:16.368 *****
2025-09-23 07:32:14.391907 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391918 | orchestrator |
2025-09-23 07:32:14.391928 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-23 07:32:14.391939 | orchestrator | Tuesday 23 September 2025  07:32:12 +0000 (0:00:00.131) 0:00:16.499 *****
2025-09-23 07:32:14.391949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.391967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.391978 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.391988 | orchestrator |
2025-09-23 07:32:14.391999 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-23 07:32:14.392009 | orchestrator | Tuesday 23 September 2025  07:32:13 +0000 (0:00:00.171) 0:00:16.670 *****
2025-09-23 07:32:14.392020 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.392030 | orchestrator |
2025-09-23 07:32:14.392041 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-23 07:32:14.392052 | orchestrator | Tuesday 23 September 2025  07:32:13 +0000 (0:00:00.158) 0:00:16.829 *****
2025-09-23 07:32:14.392062 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.392073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.392083 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.392094 | orchestrator |
2025-09-23 07:32:14.392105 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-23 07:32:14.392115 | orchestrator | Tuesday 23 September 2025  07:32:13 +0000 (0:00:00.157) 0:00:16.986 *****
2025-09-23 07:32:14.392126 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:32:14.392137 | orchestrator |
2025-09-23 07:32:14.392165 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-23 07:32:14.392176 | orchestrator | Tuesday 23 September 2025  07:32:13 +0000 (0:00:00.119) 0:00:17.105 *****
2025-09-23 07:32:14.392191 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.392202 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.392213 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.392224 | orchestrator |
2025-09-23 07:32:14.392234 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-23 07:32:14.392245 | orchestrator | Tuesday 23 September 2025  07:32:13 +0000 (0:00:00.174) 0:00:17.280 *****
2025-09-23 07:32:14.392256 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.392266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.392277 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.392287 | orchestrator |
2025-09-23 07:32:14.392298 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-23 07:32:14.392309 | orchestrator | Tuesday 23 September 2025  07:32:13 +0000 (0:00:00.129) 0:00:17.410 *****
2025-09-23 07:32:14.392319 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:14.392330 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:14.392341 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.392352 | orchestrator |
2025-09-23 07:32:14.392362 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-23 07:32:14.392373 | orchestrator | Tuesday 23 September 2025  07:32:14 +0000 (0:00:00.160) 0:00:17.570 *****
2025-09-23 07:32:14.392384 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.392400 | orchestrator |
2025-09-23 07:32:14.392411 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-23 07:32:14.392422 | orchestrator | Tuesday 23 September 2025  07:32:14 +0000 (0:00:00.190) 0:00:17.760 *****
2025-09-23 07:32:14.392433 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:14.392443 | orchestrator |
2025-09-23 07:32:14.392460 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-23 07:32:20.773366 | orchestrator | Tuesday 23 September 2025  07:32:14 +0000 (0:00:00.142) 0:00:17.902 *****
2025-09-23 07:32:20.773456 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.773470 | orchestrator |
2025-09-23 07:32:20.773481 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-23 07:32:20.773491 | orchestrator | Tuesday 23 September 2025  07:32:14 +0000 (0:00:00.135) 0:00:18.037 *****
2025-09-23 07:32:20.773502 | orchestrator | ok: [testbed-node-3] => {
2025-09-23 07:32:20.773512 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-23 07:32:20.773522 | orchestrator | }
2025-09-23 07:32:20.773532 | orchestrator |
2025-09-23 07:32:20.773542 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-23 07:32:20.773552 | orchestrator | Tuesday 23 September 2025  07:32:14 +0000 (0:00:00.276) 0:00:18.314 *****
2025-09-23 07:32:20.773562 | orchestrator | ok: [testbed-node-3] => {
2025-09-23 07:32:20.773572 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-23 07:32:20.773581 | orchestrator | }
2025-09-23 07:32:20.773591 | orchestrator |
2025-09-23 07:32:20.773601 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-23 07:32:20.773610 | orchestrator | Tuesday 23 September 2025  07:32:14 +0000 (0:00:00.140) 0:00:18.455 *****
2025-09-23 07:32:20.773620 | orchestrator | ok: [testbed-node-3] => {
2025-09-23 07:32:20.773630 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-23 07:32:20.773639 | orchestrator | }
2025-09-23 07:32:20.773654 | orchestrator |
2025-09-23 07:32:20.773671 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-23 07:32:20.773689 | orchestrator | Tuesday 23 September 2025  07:32:15 +0000 (0:00:00.136) 0:00:18.591 *****
2025-09-23 07:32:20.773705 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:32:20.773719 | orchestrator |
2025-09-23 07:32:20.773729 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-23 07:32:20.773739 | orchestrator | Tuesday 23 September 2025  07:32:15 +0000 (0:00:00.664) 0:00:19.256 *****
2025-09-23 07:32:20.773748 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:32:20.773758 | orchestrator |
2025-09-23 07:32:20.773767 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-23 07:32:20.773776 | orchestrator | Tuesday 23 September 2025  07:32:16 +0000 (0:00:00.508) 0:00:19.764 *****
2025-09-23 07:32:20.773786 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:32:20.773795 | orchestrator |
2025-09-23 07:32:20.773805 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-23 07:32:20.773814 | orchestrator | Tuesday 23 September 2025  07:32:16 +0000 (0:00:00.535) 0:00:20.300 *****
2025-09-23 07:32:20.773824 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:32:20.773833 | orchestrator |
2025-09-23 07:32:20.773843 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-23 07:32:20.773852 | orchestrator | Tuesday 23 September 2025  07:32:16 +0000 (0:00:00.161) 0:00:20.461 *****
2025-09-23 07:32:20.773862 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.773871 | orchestrator |
2025-09-23 07:32:20.773881 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-23 07:32:20.773890 | orchestrator | Tuesday 23 September 2025  07:32:17 +0000 (0:00:00.109) 0:00:20.571 *****
2025-09-23 07:32:20.773900 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.773911 | orchestrator |
2025-09-23 07:32:20.773922 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-23 07:32:20.773933 | orchestrator | Tuesday 23 September 2025  07:32:17 +0000 (0:00:00.107) 0:00:20.678 *****
2025-09-23 07:32:20.773970 | orchestrator | ok: [testbed-node-3] => {
2025-09-23 07:32:20.773982 | orchestrator |  "vgs_report": {
2025-09-23 07:32:20.773993 | orchestrator |  "vg": []
2025-09-23 07:32:20.774005 | orchestrator |  }
2025-09-23 07:32:20.774067 | orchestrator | }
2025-09-23 07:32:20.774079 | orchestrator |
2025-09-23 07:32:20.774090 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-23 07:32:20.774101 | orchestrator | Tuesday 23 September 2025  07:32:17 +0000 (0:00:00.156) 0:00:20.834 *****
2025-09-23 07:32:20.774112 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774123 | orchestrator |
2025-09-23 07:32:20.774133 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-23 07:32:20.774165 | orchestrator | Tuesday 23 September 2025  07:32:17 +0000 (0:00:00.140) 0:00:20.979 *****
2025-09-23 07:32:20.774177 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774187 | orchestrator |
2025-09-23 07:32:20.774198 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-23 07:32:20.774209 | orchestrator | Tuesday 23 September 2025  07:32:17 +0000 (0:00:00.144) 0:00:21.119 *****
2025-09-23 07:32:20.774219 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774231 | orchestrator |
2025-09-23 07:32:20.774242 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-23 07:32:20.774253 | orchestrator | Tuesday 23 September 2025  07:32:17 +0000 (0:00:00.348) 0:00:21.468 *****
2025-09-23 07:32:20.774264 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774273 | orchestrator |
2025-09-23 07:32:20.774283 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-23 07:32:20.774293 | orchestrator | Tuesday 23 September 2025  07:32:18 +0000 (0:00:00.148) 0:00:21.616 *****
2025-09-23 07:32:20.774302 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774312 | orchestrator |
2025-09-23 07:32:20.774337 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-23 07:32:20.774347 | orchestrator | Tuesday 23 September 2025  07:32:18 +0000 (0:00:00.147) 0:00:21.764 *****
2025-09-23 07:32:20.774356 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774366 | orchestrator |
2025-09-23 07:32:20.774375 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-23 07:32:20.774385 | orchestrator | Tuesday 23 September 2025  07:32:18 +0000 (0:00:00.138) 0:00:21.902 *****
2025-09-23 07:32:20.774394 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774404 | orchestrator |
2025-09-23 07:32:20.774413 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-23 07:32:20.774423 | orchestrator | Tuesday 23 September 2025  07:32:18 +0000 (0:00:00.140) 0:00:22.042 *****
2025-09-23 07:32:20.774432 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774441 | orchestrator |
2025-09-23 07:32:20.774451 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-23 07:32:20.774476 | orchestrator | Tuesday 23 September 2025  07:32:18 +0000 (0:00:00.158) 0:00:22.200 *****
2025-09-23 07:32:20.774486 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774495 | orchestrator |
2025-09-23 07:32:20.774505 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-23 07:32:20.774514 | orchestrator | Tuesday 23 September 2025  07:32:18 +0000 (0:00:00.156) 0:00:22.357 *****
2025-09-23 07:32:20.774523 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774533 | orchestrator |
2025-09-23 07:32:20.774542 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-23 07:32:20.774551 | orchestrator | Tuesday 23 September 2025  07:32:18 +0000 (0:00:00.147) 0:00:22.505 *****
2025-09-23 07:32:20.774561 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774570 | orchestrator |
2025-09-23 07:32:20.774580 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-23 07:32:20.774589 | orchestrator | Tuesday 23 September 2025  07:32:19 +0000 (0:00:00.142) 0:00:22.647 *****
2025-09-23 07:32:20.774598 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774608 | orchestrator |
2025-09-23 07:32:20.774626 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-23 07:32:20.774635 | orchestrator | Tuesday 23 September 2025  07:32:19 +0000 (0:00:00.142) 0:00:22.789 *****
2025-09-23 07:32:20.774645 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774654 | orchestrator |
2025-09-23 07:32:20.774664 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-23 07:32:20.774673 | orchestrator | Tuesday 23 September 2025  07:32:19 +0000 (0:00:00.151) 0:00:22.941 *****
2025-09-23 07:32:20.774683 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774692 | orchestrator |
2025-09-23 07:32:20.774701 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-23 07:32:20.774711 | orchestrator | Tuesday 23 September 2025  07:32:19 +0000 (0:00:00.146) 0:00:23.087 *****
2025-09-23 07:32:20.774721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:20.774732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:20.774742 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774751 | orchestrator |
2025-09-23 07:32:20.774761 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-23 07:32:20.774770 | orchestrator | Tuesday 23 September 2025  07:32:19 +0000 (0:00:00.371) 0:00:23.458 *****
2025-09-23 07:32:20.774780 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:20.774789 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:20.774799 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774808 | orchestrator |
2025-09-23 07:32:20.774818 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-23 07:32:20.774827 | orchestrator | Tuesday 23 September 2025  07:32:20 +0000 (0:00:00.154) 0:00:23.613 *****
2025-09-23 07:32:20.774841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:20.774851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:20.774861 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774870 | orchestrator |
2025-09-23 07:32:20.774880 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-23 07:32:20.774889 | orchestrator | Tuesday 23 September 2025  07:32:20 +0000 (0:00:00.180) 0:00:23.793 *****
2025-09-23 07:32:20.774899 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:20.774908 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:20.774918 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774927 | orchestrator |
2025-09-23 07:32:20.774937 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-23 07:32:20.774946 | orchestrator | Tuesday 23 September 2025  07:32:20 +0000 (0:00:00.166) 0:00:23.959 *****
2025-09-23 07:32:20.774956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:20.774965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:20.774975 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:20.774990 | orchestrator |
2025-09-23 07:32:20.775000 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-23 07:32:20.775009 | orchestrator | Tuesday 23 September 2025  07:32:20 +0000 (0:00:00.181) 0:00:24.140 *****
2025-09-23 07:32:20.775019 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:20.775033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:26.915313 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:26.915405 | orchestrator |
2025-09-23 07:32:26.915420 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-23 07:32:26.915433 | orchestrator | Tuesday 23 September 2025  07:32:20 +0000 (0:00:00.148) 0:00:24.289 *****
2025-09-23 07:32:26.915444 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:26.915457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:26.915468 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:26.915479 | orchestrator |
2025-09-23 07:32:26.915490 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-23 07:32:26.915501 | orchestrator | Tuesday 23 September 2025  07:32:20 +0000 (0:00:00.154) 0:00:24.443 *****
2025-09-23 07:32:26.915512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:26.915523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:26.915533 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:26.915545 | orchestrator |
2025-09-23 07:32:26.915555 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-23 07:32:26.915566 | orchestrator | Tuesday 23 September 2025  07:32:21 +0000 (0:00:00.149) 0:00:24.593 *****
2025-09-23 07:32:26.915577 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:32:26.915588 | orchestrator |
2025-09-23 07:32:26.915599 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-23 07:32:26.915609 | orchestrator | Tuesday 23 September 2025  07:32:21 +0000 (0:00:00.508) 0:00:25.102 *****
2025-09-23 07:32:26.915620 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:32:26.915631 | orchestrator |
2025-09-23 07:32:26.915641 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-23 07:32:26.915652 | orchestrator | Tuesday 23 September 2025  07:32:22 +0000 (0:00:00.532) 0:00:25.634 *****
2025-09-23 07:32:26.915662 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:32:26.915673 | orchestrator |
2025-09-23 07:32:26.915683 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-23 07:32:26.915694 | orchestrator | Tuesday 23 September 2025  07:32:22 +0000 (0:00:00.168) 0:00:25.803 *****
2025-09-23 07:32:26.915705 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'vg_name': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:26.915717 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'vg_name': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:26.915728 | orchestrator |
2025-09-23 07:32:26.915739 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-23 07:32:26.915749 | orchestrator | Tuesday 23 September 2025  07:32:22 +0000 (0:00:00.211) 0:00:26.015 *****
2025-09-23 07:32:26.915760 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:26.915797 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:26.915809 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:26.915820 | orchestrator |
2025-09-23 07:32:26.915830 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-23 07:32:26.915841 | orchestrator | Tuesday 23 September 2025  07:32:23 +0000 (0:00:00.523) 0:00:26.539 *****
2025-09-23 07:32:26.915851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:26.915862 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:26.915873 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:26.915884 | orchestrator |
2025-09-23 07:32:26.915895 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-23 07:32:26.915905 | orchestrator | Tuesday 23 September 2025  07:32:23 +0000 (0:00:00.186) 0:00:26.725 *****
2025-09-23 07:32:26.915917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:32:26.915928 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:32:26.915939 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:32:26.915949 | orchestrator |
2025-09-23 07:32:26.915960 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-23 07:32:26.915971 | orchestrator | Tuesday 23 September 2025  07:32:23 +0000 (0:00:00.191) 0:00:26.917 *****
2025-09-23 07:32:26.915982 | orchestrator | ok: [testbed-node-3] => {
2025-09-23 07:32:26.915993 | orchestrator |  "lvm_report": {
2025-09-23 07:32:26.916004 | orchestrator |  "lv": [
2025-09-23 07:32:26.916014 | orchestrator |  {
2025-09-23 07:32:26.916042 | orchestrator |  "lv_name": "osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7",
2025-09-23 07:32:26.916054 | orchestrator |  "vg_name": "ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7"
2025-09-23 07:32:26.916065 |
orchestrator |  }, 2025-09-23 07:32:26.916075 | orchestrator |  { 2025-09-23 07:32:26.916086 | orchestrator |  "lv_name": "osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f", 2025-09-23 07:32:26.916096 | orchestrator |  "vg_name": "ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f" 2025-09-23 07:32:26.916107 | orchestrator |  } 2025-09-23 07:32:26.916118 | orchestrator |  ], 2025-09-23 07:32:26.916128 | orchestrator |  "pv": [ 2025-09-23 07:32:26.916161 | orchestrator |  { 2025-09-23 07:32:26.916173 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-23 07:32:26.916184 | orchestrator |  "vg_name": "ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f" 2025-09-23 07:32:26.916194 | orchestrator |  }, 2025-09-23 07:32:26.916205 | orchestrator |  { 2025-09-23 07:32:26.916216 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-23 07:32:26.916226 | orchestrator |  "vg_name": "ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7" 2025-09-23 07:32:26.916237 | orchestrator |  } 2025-09-23 07:32:26.916248 | orchestrator |  ] 2025-09-23 07:32:26.916258 | orchestrator |  } 2025-09-23 07:32:26.916269 | orchestrator | } 2025-09-23 07:32:26.916280 | orchestrator | 2025-09-23 07:32:26.916291 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-23 07:32:26.916301 | orchestrator | 2025-09-23 07:32:26.916312 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-23 07:32:26.916323 | orchestrator | Tuesday 23 September 2025 07:32:23 +0000 (0:00:00.385) 0:00:27.302 ***** 2025-09-23 07:32:26.916334 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-23 07:32:26.916354 | orchestrator | 2025-09-23 07:32:26.916365 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-23 07:32:26.916375 | orchestrator | Tuesday 23 September 2025 07:32:24 +0000 (0:00:00.387) 0:00:27.690 ***** 2025-09-23 07:32:26.916386 | orchestrator | ok: [testbed-node-4] 2025-09-23 
07:32:26.916397 | orchestrator | 2025-09-23 07:32:26.916408 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:26.916419 | orchestrator | Tuesday 23 September 2025 07:32:24 +0000 (0:00:00.358) 0:00:28.049 ***** 2025-09-23 07:32:26.916447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-23 07:32:26.916459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-23 07:32:26.916469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-23 07:32:26.916480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-23 07:32:26.916491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-23 07:32:26.916502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-23 07:32:26.916512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-23 07:32:26.916527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-23 07:32:26.916538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-23 07:32:26.916549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-23 07:32:26.916560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-23 07:32:26.916570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-23 07:32:26.916581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-23 07:32:26.916592 | orchestrator | 2025-09-23 07:32:26.916603 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-09-23 07:32:26.916613 | orchestrator | Tuesday 23 September 2025 07:32:24 +0000 (0:00:00.441) 0:00:28.490 ***** 2025-09-23 07:32:26.916624 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:26.916635 | orchestrator | 2025-09-23 07:32:26.916645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:26.916656 | orchestrator | Tuesday 23 September 2025 07:32:25 +0000 (0:00:00.207) 0:00:28.698 ***** 2025-09-23 07:32:26.916667 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:26.916677 | orchestrator | 2025-09-23 07:32:26.916688 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:26.916699 | orchestrator | Tuesday 23 September 2025 07:32:25 +0000 (0:00:00.193) 0:00:28.891 ***** 2025-09-23 07:32:26.916709 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:26.916720 | orchestrator | 2025-09-23 07:32:26.916731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:26.916742 | orchestrator | Tuesday 23 September 2025 07:32:26 +0000 (0:00:00.672) 0:00:29.564 ***** 2025-09-23 07:32:26.916752 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:26.916763 | orchestrator | 2025-09-23 07:32:26.916774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:26.916785 | orchestrator | Tuesday 23 September 2025 07:32:26 +0000 (0:00:00.201) 0:00:29.766 ***** 2025-09-23 07:32:26.916795 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:26.916806 | orchestrator | 2025-09-23 07:32:26.916817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:26.916828 | orchestrator | Tuesday 23 September 2025 07:32:26 +0000 (0:00:00.200) 0:00:29.966 ***** 2025-09-23 
07:32:26.916838 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:26.916849 | orchestrator | 2025-09-23 07:32:26.916867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:26.916878 | orchestrator | Tuesday 23 September 2025 07:32:26 +0000 (0:00:00.202) 0:00:30.169 ***** 2025-09-23 07:32:26.916888 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:26.916899 | orchestrator | 2025-09-23 07:32:26.916917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:37.728118 | orchestrator | Tuesday 23 September 2025 07:32:26 +0000 (0:00:00.256) 0:00:30.426 ***** 2025-09-23 07:32:37.728274 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.728293 | orchestrator | 2025-09-23 07:32:37.728306 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:37.728317 | orchestrator | Tuesday 23 September 2025 07:32:27 +0000 (0:00:00.214) 0:00:30.640 ***** 2025-09-23 07:32:37.728328 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7) 2025-09-23 07:32:37.728340 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7) 2025-09-23 07:32:37.728351 | orchestrator | 2025-09-23 07:32:37.728362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:37.728373 | orchestrator | Tuesday 23 September 2025 07:32:27 +0000 (0:00:00.424) 0:00:31.065 ***** 2025-09-23 07:32:37.728383 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec) 2025-09-23 07:32:37.728394 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec) 2025-09-23 07:32:37.728405 | orchestrator | 2025-09-23 07:32:37.728415 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-09-23 07:32:37.728426 | orchestrator | Tuesday 23 September 2025 07:32:27 +0000 (0:00:00.429) 0:00:31.494 ***** 2025-09-23 07:32:37.728436 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd) 2025-09-23 07:32:37.728447 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd) 2025-09-23 07:32:37.728472 | orchestrator | 2025-09-23 07:32:37.728484 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:37.728505 | orchestrator | Tuesday 23 September 2025 07:32:28 +0000 (0:00:00.418) 0:00:31.913 ***** 2025-09-23 07:32:37.728516 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91) 2025-09-23 07:32:37.728527 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91) 2025-09-23 07:32:37.728538 | orchestrator | 2025-09-23 07:32:37.728548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-23 07:32:37.728559 | orchestrator | Tuesday 23 September 2025 07:32:28 +0000 (0:00:00.458) 0:00:32.371 ***** 2025-09-23 07:32:37.728570 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-23 07:32:37.728580 | orchestrator | 2025-09-23 07:32:37.728591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.728601 | orchestrator | Tuesday 23 September 2025 07:32:29 +0000 (0:00:00.334) 0:00:32.706 ***** 2025-09-23 07:32:37.728612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-23 07:32:37.728641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-23 07:32:37.728653 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-23 07:32:37.728665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-23 07:32:37.728678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-23 07:32:37.728691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-23 07:32:37.728704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-23 07:32:37.728739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-23 07:32:37.728752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-23 07:32:37.728765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-23 07:32:37.728777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-23 07:32:37.728787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-23 07:32:37.728798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-23 07:32:37.728808 | orchestrator | 2025-09-23 07:32:37.728818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.728829 | orchestrator | Tuesday 23 September 2025 07:32:29 +0000 (0:00:00.648) 0:00:33.354 ***** 2025-09-23 07:32:37.728840 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.728850 | orchestrator | 2025-09-23 07:32:37.728861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.728871 | orchestrator | Tuesday 23 September 2025 07:32:30 +0000 
(0:00:00.256) 0:00:33.611 ***** 2025-09-23 07:32:37.728882 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.728893 | orchestrator | 2025-09-23 07:32:37.728904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.728914 | orchestrator | Tuesday 23 September 2025 07:32:30 +0000 (0:00:00.209) 0:00:33.821 ***** 2025-09-23 07:32:37.728925 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.728935 | orchestrator | 2025-09-23 07:32:37.728946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.728956 | orchestrator | Tuesday 23 September 2025 07:32:30 +0000 (0:00:00.186) 0:00:34.008 ***** 2025-09-23 07:32:37.728967 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.728978 | orchestrator | 2025-09-23 07:32:37.729007 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729019 | orchestrator | Tuesday 23 September 2025 07:32:30 +0000 (0:00:00.198) 0:00:34.206 ***** 2025-09-23 07:32:37.729030 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729040 | orchestrator | 2025-09-23 07:32:37.729051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729062 | orchestrator | Tuesday 23 September 2025 07:32:30 +0000 (0:00:00.236) 0:00:34.443 ***** 2025-09-23 07:32:37.729072 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729083 | orchestrator | 2025-09-23 07:32:37.729094 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729104 | orchestrator | Tuesday 23 September 2025 07:32:31 +0000 (0:00:00.212) 0:00:34.655 ***** 2025-09-23 07:32:37.729115 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729126 | orchestrator | 2025-09-23 07:32:37.729136 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729166 | orchestrator | Tuesday 23 September 2025 07:32:31 +0000 (0:00:00.203) 0:00:34.859 ***** 2025-09-23 07:32:37.729177 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729188 | orchestrator | 2025-09-23 07:32:37.729199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729209 | orchestrator | Tuesday 23 September 2025 07:32:31 +0000 (0:00:00.210) 0:00:35.070 ***** 2025-09-23 07:32:37.729220 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-23 07:32:37.729231 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-23 07:32:37.729242 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-23 07:32:37.729253 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-23 07:32:37.729264 | orchestrator | 2025-09-23 07:32:37.729275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729286 | orchestrator | Tuesday 23 September 2025 07:32:32 +0000 (0:00:01.202) 0:00:36.272 ***** 2025-09-23 07:32:37.729305 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729316 | orchestrator | 2025-09-23 07:32:37.729327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729338 | orchestrator | Tuesday 23 September 2025 07:32:32 +0000 (0:00:00.221) 0:00:36.493 ***** 2025-09-23 07:32:37.729348 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729359 | orchestrator | 2025-09-23 07:32:37.729370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729380 | orchestrator | Tuesday 23 September 2025 07:32:33 +0000 (0:00:00.223) 0:00:36.717 ***** 2025-09-23 07:32:37.729391 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729401 | orchestrator | 2025-09-23 
07:32:37.729412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-23 07:32:37.729423 | orchestrator | Tuesday 23 September 2025 07:32:33 +0000 (0:00:00.657) 0:00:37.374 ***** 2025-09-23 07:32:37.729433 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729444 | orchestrator | 2025-09-23 07:32:37.729455 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-23 07:32:37.729465 | orchestrator | Tuesday 23 September 2025 07:32:34 +0000 (0:00:00.205) 0:00:37.580 ***** 2025-09-23 07:32:37.729476 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729487 | orchestrator | 2025-09-23 07:32:37.729498 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-23 07:32:37.729508 | orchestrator | Tuesday 23 September 2025 07:32:34 +0000 (0:00:00.147) 0:00:37.727 ***** 2025-09-23 07:32:37.729519 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ede7e8c-1177-5738-bf30-f710eefa62dc'}}) 2025-09-23 07:32:37.729530 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6b345e42-d385-5c5d-ac31-471707d336a3'}}) 2025-09-23 07:32:37.729541 | orchestrator | 2025-09-23 07:32:37.729551 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-23 07:32:37.729562 | orchestrator | Tuesday 23 September 2025 07:32:34 +0000 (0:00:00.207) 0:00:37.934 ***** 2025-09-23 07:32:37.729573 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'}) 2025-09-23 07:32:37.729585 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'}) 2025-09-23 07:32:37.729596 | orchestrator | 2025-09-23 
07:32:37.729606 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-23 07:32:37.729617 | orchestrator | Tuesday 23 September 2025 07:32:36 +0000 (0:00:01.844) 0:00:39.779 ***** 2025-09-23 07:32:37.729628 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})  2025-09-23 07:32:37.729639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})  2025-09-23 07:32:37.729650 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:37.729661 | orchestrator | 2025-09-23 07:32:37.729671 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-23 07:32:37.729682 | orchestrator | Tuesday 23 September 2025 07:32:36 +0000 (0:00:00.167) 0:00:39.946 ***** 2025-09-23 07:32:37.729693 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'}) 2025-09-23 07:32:37.729704 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'}) 2025-09-23 07:32:37.729714 | orchestrator | 2025-09-23 07:32:37.729732 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-23 07:32:43.614786 | orchestrator | Tuesday 23 September 2025 07:32:37 +0000 (0:00:01.293) 0:00:41.239 ***** 2025-09-23 07:32:43.614908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})  2025-09-23 07:32:43.614925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 
'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})  2025-09-23 07:32:43.614937 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.614949 | orchestrator | 2025-09-23 07:32:43.614961 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-23 07:32:43.614973 | orchestrator | Tuesday 23 September 2025 07:32:37 +0000 (0:00:00.180) 0:00:41.420 ***** 2025-09-23 07:32:43.614983 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.614994 | orchestrator | 2025-09-23 07:32:43.615005 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-23 07:32:43.615016 | orchestrator | Tuesday 23 September 2025 07:32:38 +0000 (0:00:00.132) 0:00:41.552 ***** 2025-09-23 07:32:43.615027 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})  2025-09-23 07:32:43.615054 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})  2025-09-23 07:32:43.615065 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615076 | orchestrator | 2025-09-23 07:32:43.615086 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-23 07:32:43.615097 | orchestrator | Tuesday 23 September 2025 07:32:38 +0000 (0:00:00.229) 0:00:41.781 ***** 2025-09-23 07:32:43.615108 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615118 | orchestrator | 2025-09-23 07:32:43.615129 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-23 07:32:43.615194 | orchestrator | Tuesday 23 September 2025 07:32:38 +0000 (0:00:00.140) 0:00:41.921 ***** 2025-09-23 07:32:43.615208 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})  2025-09-23 07:32:43.615219 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})  2025-09-23 07:32:43.615229 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615240 | orchestrator | 2025-09-23 07:32:43.615250 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-23 07:32:43.615261 | orchestrator | Tuesday 23 September 2025 07:32:38 +0000 (0:00:00.149) 0:00:42.071 ***** 2025-09-23 07:32:43.615277 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615288 | orchestrator | 2025-09-23 07:32:43.615298 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-23 07:32:43.615308 | orchestrator | Tuesday 23 September 2025 07:32:38 +0000 (0:00:00.359) 0:00:42.430 ***** 2025-09-23 07:32:43.615319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})  2025-09-23 07:32:43.615332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})  2025-09-23 07:32:43.615344 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615356 | orchestrator | 2025-09-23 07:32:43.615368 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-23 07:32:43.615380 | orchestrator | Tuesday 23 September 2025 07:32:39 +0000 (0:00:00.183) 0:00:42.613 ***** 2025-09-23 07:32:43.615392 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:32:43.615404 | orchestrator | 2025-09-23 07:32:43.615416 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] 
**************** 2025-09-23 07:32:43.615428 | orchestrator | Tuesday 23 September 2025 07:32:39 +0000 (0:00:00.148) 0:00:42.761 ***** 2025-09-23 07:32:43.615448 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})  2025-09-23 07:32:43.615462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})  2025-09-23 07:32:43.615474 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615486 | orchestrator | 2025-09-23 07:32:43.615499 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-23 07:32:43.615511 | orchestrator | Tuesday 23 September 2025 07:32:39 +0000 (0:00:00.179) 0:00:42.941 ***** 2025-09-23 07:32:43.615524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})  2025-09-23 07:32:43.615536 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})  2025-09-23 07:32:43.615549 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615561 | orchestrator | 2025-09-23 07:32:43.615573 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-23 07:32:43.615585 | orchestrator | Tuesday 23 September 2025 07:32:39 +0000 (0:00:00.224) 0:00:43.166 ***** 2025-09-23 07:32:43.615615 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})  2025-09-23 07:32:43.615629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 
'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})  2025-09-23 07:32:43.615641 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615653 | orchestrator | 2025-09-23 07:32:43.615664 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-23 07:32:43.615675 | orchestrator | Tuesday 23 September 2025 07:32:39 +0000 (0:00:00.181) 0:00:43.348 ***** 2025-09-23 07:32:43.615685 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615696 | orchestrator | 2025-09-23 07:32:43.615707 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-23 07:32:43.615717 | orchestrator | Tuesday 23 September 2025 07:32:39 +0000 (0:00:00.122) 0:00:43.470 ***** 2025-09-23 07:32:43.615728 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615739 | orchestrator | 2025-09-23 07:32:43.615750 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-23 07:32:43.615760 | orchestrator | Tuesday 23 September 2025 07:32:40 +0000 (0:00:00.139) 0:00:43.609 ***** 2025-09-23 07:32:43.615771 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:32:43.615782 | orchestrator | 2025-09-23 07:32:43.615793 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-23 07:32:43.615803 | orchestrator | Tuesday 23 September 2025 07:32:40 +0000 (0:00:00.138) 0:00:43.748 ***** 2025-09-23 07:32:43.615814 | orchestrator | ok: [testbed-node-4] => { 2025-09-23 07:32:43.615825 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-23 07:32:43.615836 | orchestrator | } 2025-09-23 07:32:43.615847 | orchestrator | 2025-09-23 07:32:43.615857 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-23 07:32:43.615868 | orchestrator | Tuesday 23 September 2025 07:32:40 +0000 (0:00:00.132) 0:00:43.880 ***** 2025-09-23 07:32:43.615878 | orchestrator | 
ok: [testbed-node-4] => {
2025-09-23 07:32:43.615889 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-23 07:32:43.615900 | orchestrator | }
2025-09-23 07:32:43.615921 | orchestrator |
2025-09-23 07:32:43.615932 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-23 07:32:43.615943 | orchestrator | Tuesday 23 September 2025 07:32:40 +0000 (0:00:00.192) 0:00:44.073 *****
2025-09-23 07:32:43.615953 | orchestrator | ok: [testbed-node-4] => {
2025-09-23 07:32:43.615964 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-23 07:32:43.615982 | orchestrator | }
2025-09-23 07:32:43.615992 | orchestrator |
2025-09-23 07:32:43.616003 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-23 07:32:43.616014 | orchestrator | Tuesday 23 September 2025 07:32:40 +0000 (0:00:00.144) 0:00:44.218 *****
2025-09-23 07:32:43.616024 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:32:43.616035 | orchestrator |
2025-09-23 07:32:43.616046 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-23 07:32:43.616056 | orchestrator | Tuesday 23 September 2025 07:32:41 +0000 (0:00:00.721) 0:00:44.939 *****
2025-09-23 07:32:43.616072 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:32:43.616083 | orchestrator |
2025-09-23 07:32:43.616094 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-23 07:32:43.616105 | orchestrator | Tuesday 23 September 2025 07:32:41 +0000 (0:00:00.512) 0:00:45.451 *****
2025-09-23 07:32:43.616115 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:32:43.616126 | orchestrator |
2025-09-23 07:32:43.616163 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-23 07:32:43.616175 | orchestrator | Tuesday 23 September 2025 07:32:42 +0000 (0:00:00.156) 0:00:46.006 *****
2025-09-23 07:32:43.616186 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:32:43.616197 | orchestrator |
2025-09-23 07:32:43.616207 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-23 07:32:43.616218 | orchestrator | Tuesday 23 September 2025 07:32:42 +0000 (0:00:00.156) 0:00:46.162 *****
2025-09-23 07:32:43.616229 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:43.616239 | orchestrator |
2025-09-23 07:32:43.616250 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-23 07:32:43.616261 | orchestrator | Tuesday 23 September 2025 07:32:42 +0000 (0:00:00.116) 0:00:46.279 *****
2025-09-23 07:32:43.616271 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:43.616282 | orchestrator |
2025-09-23 07:32:43.616293 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-23 07:32:43.616303 | orchestrator | Tuesday 23 September 2025 07:32:42 +0000 (0:00:00.129) 0:00:46.409 *****
2025-09-23 07:32:43.616314 | orchestrator | ok: [testbed-node-4] => {
2025-09-23 07:32:43.616325 | orchestrator |     "vgs_report": {
2025-09-23 07:32:43.616336 | orchestrator |         "vg": []
2025-09-23 07:32:43.616347 | orchestrator |     }
2025-09-23 07:32:43.616358 | orchestrator | }
2025-09-23 07:32:43.616369 | orchestrator |
2025-09-23 07:32:43.616379 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-23 07:32:43.616390 | orchestrator | Tuesday 23 September 2025 07:32:43 +0000 (0:00:00.146) 0:00:46.556 *****
2025-09-23 07:32:43.616400 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:43.616411 | orchestrator |
2025-09-23 07:32:43.616422 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-23 07:32:43.616432 | orchestrator | Tuesday 23 September 2025 07:32:43 +0000 (0:00:00.135) 0:00:46.711 *****
2025-09-23 07:32:43.616443 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:43.616454 | orchestrator |
2025-09-23 07:32:43.616464 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-23 07:32:43.616475 | orchestrator | Tuesday 23 September 2025 07:32:43 +0000 (0:00:00.135) 0:00:46.847 *****
2025-09-23 07:32:43.616486 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:43.616497 | orchestrator |
2025-09-23 07:32:43.616508 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-23 07:32:43.616518 | orchestrator | Tuesday 23 September 2025 07:32:43 +0000 (0:00:00.137) 0:00:46.985 *****
2025-09-23 07:32:43.616529 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:43.616540 | orchestrator |
2025-09-23 07:32:43.616551 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-23 07:32:43.616568 | orchestrator | Tuesday 23 September 2025 07:32:43 +0000 (0:00:00.142) 0:00:47.128 *****
2025-09-23 07:32:48.683902 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.683985 | orchestrator |
2025-09-23 07:32:48.684017 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-23 07:32:48.684027 | orchestrator | Tuesday 23 September 2025 07:32:43 +0000 (0:00:00.118) 0:00:47.246 *****
2025-09-23 07:32:48.684035 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684043 | orchestrator |
2025-09-23 07:32:48.684052 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-23 07:32:48.684060 | orchestrator | Tuesday 23 September 2025 07:32:44 +0000 (0:00:00.383) 0:00:47.630 *****
2025-09-23 07:32:48.684068 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684076 | orchestrator |
2025-09-23 07:32:48.684084 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-23 07:32:48.684091 | orchestrator | Tuesday 23 September 2025 07:32:44 +0000 (0:00:00.141) 0:00:47.772 *****
2025-09-23 07:32:48.684099 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684107 | orchestrator |
2025-09-23 07:32:48.684115 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-23 07:32:48.684123 | orchestrator | Tuesday 23 September 2025 07:32:44 +0000 (0:00:00.148) 0:00:47.921 *****
2025-09-23 07:32:48.684130 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684210 | orchestrator |
2025-09-23 07:32:48.684225 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-23 07:32:48.684235 | orchestrator | Tuesday 23 September 2025 07:32:44 +0000 (0:00:00.155) 0:00:48.077 *****
2025-09-23 07:32:48.684243 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684251 | orchestrator |
2025-09-23 07:32:48.684258 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-23 07:32:48.684266 | orchestrator | Tuesday 23 September 2025 07:32:44 +0000 (0:00:00.163) 0:00:48.240 *****
2025-09-23 07:32:48.684273 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684281 | orchestrator |
2025-09-23 07:32:48.684289 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-23 07:32:48.684297 | orchestrator | Tuesday 23 September 2025 07:32:44 +0000 (0:00:00.149) 0:00:48.389 *****
2025-09-23 07:32:48.684304 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684312 | orchestrator |
2025-09-23 07:32:48.684320 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-23 07:32:48.684327 | orchestrator | Tuesday 23 September 2025 07:32:45 +0000 (0:00:00.134) 0:00:48.524 *****
2025-09-23 07:32:48.684336 | orchestrator | skipping: [testbed-node-4]
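The "Gather … VGs with total and available size in bytes" and "Combine JSON" tasks above apparently collect LVM's JSON reporting output and reduce it to per-VG totals (here empty, since no DB/WAL VGs exist on testbed-node-4). As a rough, hypothetical sketch of that step, this parses output shaped like `vgs --units B --reportformat json`; the VG name and sizes below are illustrative, not taken from this log:

```python
import json

# Assumed shape of `vgs --units B --reportformat json` output (example values).
vgs_json = """
{"report": [{"vg": [
    {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "96636764160B"}
]}]}
"""

def vg_sizes(report_text):
    """Map VG name -> (total_bytes, free_bytes) from an LVM JSON report."""
    report = json.loads(report_text)
    return {
        vg["vg_name"]: (int(vg["vg_size"].rstrip("B")), int(vg["vg_free"].rstrip("B")))
        for entry in report["report"]
        for vg in entry["vg"]
    }

print(vg_sizes(vgs_json))
```

With no DB/WAL VGs present, the same function would simply return an empty dict, matching the empty `vgs_report` printed above.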
2025-09-23 07:32:48.684349 | orchestrator |
2025-09-23 07:32:48.684362 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-23 07:32:48.684375 | orchestrator | Tuesday 23 September 2025 07:32:45 +0000 (0:00:00.145) 0:00:48.670 *****
2025-09-23 07:32:48.684389 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684397 | orchestrator |
2025-09-23 07:32:48.684405 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-23 07:32:48.684413 | orchestrator | Tuesday 23 September 2025 07:32:45 +0000 (0:00:00.158) 0:00:48.828 *****
2025-09-23 07:32:48.684436 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.684453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.684467 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684481 | orchestrator |
2025-09-23 07:32:48.684496 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-23 07:32:48.684511 | orchestrator | Tuesday 23 September 2025 07:32:45 +0000 (0:00:00.167) 0:00:48.996 *****
2025-09-23 07:32:48.684525 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.684542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.684566 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684581 | orchestrator |
2025-09-23 07:32:48.684596 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-23 07:32:48.684611 | orchestrator | Tuesday 23 September 2025 07:32:45 +0000 (0:00:00.176) 0:00:49.172 *****
2025-09-23 07:32:48.684625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.684641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.684656 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684670 | orchestrator |
2025-09-23 07:32:48.684683 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-23 07:32:48.684697 | orchestrator | Tuesday 23 September 2025 07:32:45 +0000 (0:00:00.160) 0:00:49.332 *****
2025-09-23 07:32:48.684711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.684725 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.684738 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684755 | orchestrator |
2025-09-23 07:32:48.684772 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-23 07:32:48.684807 | orchestrator | Tuesday 23 September 2025 07:32:46 +0000 (0:00:00.389) 0:00:49.721 *****
2025-09-23 07:32:48.684818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.684828 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.684837 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684847 | orchestrator |
2025-09-23 07:32:48.684856 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-23 07:32:48.684866 | orchestrator | Tuesday 23 September 2025 07:32:46 +0000 (0:00:00.163) 0:00:49.885 *****
2025-09-23 07:32:48.684875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.684885 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.684894 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684904 | orchestrator |
2025-09-23 07:32:48.684914 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-23 07:32:48.684923 | orchestrator | Tuesday 23 September 2025 07:32:46 +0000 (0:00:00.157) 0:00:50.043 *****
2025-09-23 07:32:48.684933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.684942 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.684952 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.684961 | orchestrator |
2025-09-23 07:32:48.684971 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-23 07:32:48.684980 | orchestrator | Tuesday 23 September 2025 07:32:46 +0000 (0:00:00.158) 0:00:50.201 *****
2025-09-23 07:32:48.684990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.685007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.685017 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.685027 | orchestrator |
2025-09-23 07:32:48.685036 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-23 07:32:48.685083 | orchestrator | Tuesday 23 September 2025 07:32:46 +0000 (0:00:00.168) 0:00:50.369 *****
2025-09-23 07:32:48.685093 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:32:48.685103 | orchestrator |
2025-09-23 07:32:48.685113 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-23 07:32:48.685122 | orchestrator | Tuesday 23 September 2025 07:32:47 +0000 (0:00:00.506) 0:00:50.876 *****
2025-09-23 07:32:48.685132 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:32:48.685169 | orchestrator |
2025-09-23 07:32:48.685180 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-23 07:32:48.685189 | orchestrator | Tuesday 23 September 2025 07:32:47 +0000 (0:00:00.528) 0:00:51.404 *****
2025-09-23 07:32:48.685199 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:32:48.685208 | orchestrator |
2025-09-23 07:32:48.685218 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-23 07:32:48.685227 | orchestrator | Tuesday 23 September 2025 07:32:48 +0000 (0:00:00.172) 0:00:51.577 *****
2025-09-23 07:32:48.685237 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'vg_name': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.685248 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'vg_name': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.685257 | orchestrator |
2025-09-23 07:32:48.685267 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-23 07:32:48.685276 | orchestrator | Tuesday 23 September 2025 07:32:48 +0000 (0:00:00.219) 0:00:51.796 *****
2025-09-23 07:32:48.685286 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.685295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.685305 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:48.685314 | orchestrator |
2025-09-23 07:32:48.685324 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-23 07:32:48.685333 | orchestrator | Tuesday 23 September 2025 07:32:48 +0000 (0:00:00.227) 0:00:52.023 *****
2025-09-23 07:32:48.685342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:48.685352 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:48.685368 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:55.577428 | orchestrator |
2025-09-23 07:32:55.577526 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-23 07:32:55.577541 | orchestrator | Tuesday 23 September 2025 07:32:48 +0000 (0:00:00.171) 0:00:52.195 *****
2025-09-23 07:32:55.577553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:32:55.577566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:32:55.577576 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:32:55.577588 | orchestrator |
2025-09-23 07:32:55.577599 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-23 07:32:55.577609 | orchestrator | Tuesday 23 September 2025 07:32:48 +0000 (0:00:00.178) 0:00:52.374 *****
2025-09-23 07:32:55.577646 | orchestrator | ok: [testbed-node-4] => {
2025-09-23 07:32:55.577656 | orchestrator |     "lvm_report": {
2025-09-23 07:32:55.577668 | orchestrator |         "lv": [
2025-09-23 07:32:55.577677 | orchestrator |             {
2025-09-23 07:32:55.577684 | orchestrator |                 "lv_name": "osd-block-6b345e42-d385-5c5d-ac31-471707d336a3",
2025-09-23 07:32:55.577691 | orchestrator |                 "vg_name": "ceph-6b345e42-d385-5c5d-ac31-471707d336a3"
2025-09-23 07:32:55.577698 | orchestrator |             },
2025-09-23 07:32:55.577705 | orchestrator |             {
2025-09-23 07:32:55.577716 | orchestrator |                 "lv_name": "osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc",
2025-09-23 07:32:55.577726 | orchestrator |                 "vg_name": "ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc"
2025-09-23 07:32:55.577736 | orchestrator |             }
2025-09-23 07:32:55.577746 | orchestrator |         ],
2025-09-23 07:32:55.577755 | orchestrator |         "pv": [
2025-09-23 07:32:55.577765 | orchestrator |             {
2025-09-23 07:32:55.577774 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-23 07:32:55.577785 | orchestrator |                 "vg_name": "ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc"
2025-09-23 07:32:55.577795 | orchestrator |             },
2025-09-23 07:32:55.577805 | orchestrator |             {
2025-09-23 07:32:55.577816 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-23 07:32:55.577826 | orchestrator |                 "vg_name": "ceph-6b345e42-d385-5c5d-ac31-471707d336a3"
2025-09-23 07:32:55.577837 | orchestrator |             }
2025-09-23 07:32:55.577847 | orchestrator |         ]
2025-09-23 07:32:55.577857 | orchestrator |     }
2025-09-23 07:32:55.577867 | orchestrator | }
2025-09-23 07:32:55.577877 | orchestrator |
2025-09-23 07:32:55.577887 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-23 07:32:55.577896 | orchestrator |
2025-09-23 07:32:55.577906 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-23 07:32:55.577915 | orchestrator | Tuesday 23 September 2025 07:32:49 +0000 (0:00:00.654) 0:00:53.029 *****
2025-09-23 07:32:55.577925 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-23 07:32:55.577934 | orchestrator |
2025-09-23 07:32:55.577958 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-23 07:32:55.577969 | orchestrator | Tuesday 23 September 2025 07:32:49 +0000 (0:00:00.342) 0:00:53.371 *****
2025-09-23 07:32:55.577980 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:32:55.577991 | orchestrator |
2025-09-23 07:32:55.578001 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578012 | orchestrator | Tuesday 23 September 2025 07:32:50 +0000 (0:00:00.274) 0:00:53.645 *****
2025-09-23 07:32:55.578076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-23 07:32:55.578088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-23 07:32:55.578099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-23 07:32:55.578111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-23 07:32:55.578123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-09-23 07:32:55.578133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-09-23 07:32:55.578164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-09-23 07:32:55.578174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-09-23 07:32:55.578183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-09-23 07:32:55.578195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-09-23 07:32:55.578205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-09-23 07:32:55.578228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-09-23 07:32:55.578238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-09-23 07:32:55.578249 | orchestrator |
2025-09-23 07:32:55.578260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578270 | orchestrator | Tuesday 23 September 2025 07:32:50 +0000 (0:00:00.454) 0:00:54.099 *****
2025-09-23 07:32:55.578281 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:32:55.578297 | orchestrator |
2025-09-23 07:32:55.578307 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578317 | orchestrator | Tuesday 23 September 2025 07:32:50 +0000 (0:00:00.218) 0:00:54.318 *****
2025-09-23 07:32:55.578328 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:32:55.578338 | orchestrator |
2025-09-23 07:32:55.578347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578378 | orchestrator | Tuesday 23 September 2025 07:32:51 +0000 (0:00:00.212) 0:00:54.530 *****
2025-09-23 07:32:55.578392 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:32:55.578405 | orchestrator |
2025-09-23 07:32:55.578417 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578429 | orchestrator | Tuesday 23 September 2025 07:32:51 +0000 (0:00:00.213) 0:00:54.744 *****
2025-09-23 07:32:55.578440 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:32:55.578450 | orchestrator |
2025-09-23 07:32:55.578459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578469 | orchestrator | Tuesday 23 September 2025 07:32:51 +0000 (0:00:00.234) 0:00:54.978 *****
2025-09-23 07:32:55.578480 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:32:55.578490 | orchestrator |
2025-09-23 07:32:55.578500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578511 | orchestrator | Tuesday 23 September 2025 07:32:51 +0000 (0:00:00.220) 0:00:55.199 *****
2025-09-23 07:32:55.578521 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:32:55.578531 | orchestrator |
2025-09-23 07:32:55.578542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578552 | orchestrator | Tuesday 23 September 2025 07:32:52 +0000 (0:00:00.655) 0:00:55.855 *****
2025-09-23 07:32:55.578562 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:32:55.578573 | orchestrator |
2025-09-23 07:32:55.578583 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578594 | orchestrator | Tuesday 23 September 2025 07:32:52 +0000 (0:00:00.225) 0:00:56.081 *****
2025-09-23 07:32:55.578604 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:32:55.578615 | orchestrator |
2025-09-23 07:32:55.578626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578637 | orchestrator | Tuesday 23 September 2025 07:32:52 +0000 (0:00:00.260) 0:00:56.341 *****
2025-09-23 07:32:55.578648 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269)
2025-09-23 07:32:55.578660 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269)
2025-09-23 07:32:55.578671 | orchestrator |
2025-09-23 07:32:55.578682 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578692 | orchestrator | Tuesday 23 September 2025 07:32:53 +0000 (0:00:00.452) 0:00:56.794 *****
2025-09-23 07:32:55.578702 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173)
2025-09-23 07:32:55.578713 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173)
2025-09-23 07:32:55.578723 | orchestrator |
2025-09-23 07:32:55.578733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578742 | orchestrator | Tuesday 23 September 2025 07:32:53 +0000 (0:00:00.470) 0:00:57.264 *****
2025-09-23 07:32:55.578767 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6)
2025-09-23 07:32:55.578776 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6)
2025-09-23 07:32:55.578786 | orchestrator |
2025-09-23 07:32:55.578795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578803 | orchestrator | Tuesday 23 September 2025 07:32:54 +0000 (0:00:00.466) 0:00:57.731 *****
2025-09-23 07:32:55.578812 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb)
2025-09-23 07:32:55.578821 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb)
2025-09-23 07:32:55.578829 | orchestrator |
2025-09-23 07:32:55.578838 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-23 07:32:55.578847 | orchestrator | Tuesday 23 September 2025 07:32:54 +0000 (0:00:00.491) 0:00:58.222 *****
2025-09-23 07:32:55.578856 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-23 07:32:55.578865 | orchestrator |
2025-09-23 07:32:55.578874 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:32:55.578883 | orchestrator | Tuesday 23 September 2025 07:32:55 +0000 (0:00:00.393) 0:00:58.616 *****
2025-09-23 07:32:55.578891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-09-23 07:32:55.578900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-09-23 07:32:55.578909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-09-23 07:32:55.578918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-09-23 07:32:55.578927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-09-23 07:32:55.578935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-09-23 07:32:55.578944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-09-23 07:32:55.578953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-09-23 07:32:55.578961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-09-23 07:32:55.578970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-09-23 07:32:55.578979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-09-23 07:32:55.578998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-09-23 07:33:04.326097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-09-23 07:33:04.326202 | orchestrator |
2025-09-23 07:33:04.326215 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326235 | orchestrator | Tuesday 23 September 2025 07:32:55 +0000 (0:00:00.463) 0:00:59.080 *****
2025-09-23 07:33:04.326252 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326262 | orchestrator |
2025-09-23 07:33:04.326272 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326280 | orchestrator | Tuesday 23 September 2025 07:32:55 +0000 (0:00:00.215) 0:00:59.295 *****
2025-09-23 07:33:04.326290 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326298 | orchestrator |
2025-09-23 07:33:04.326307 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326316 | orchestrator | Tuesday 23 September 2025 07:32:55 +0000 (0:00:00.224) 0:00:59.520 *****
2025-09-23 07:33:04.326324 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326333 | orchestrator |
2025-09-23 07:33:04.326341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326370 | orchestrator | Tuesday 23 September 2025 07:32:56 +0000 (0:00:00.691) 0:01:00.212 *****
2025-09-23 07:33:04.326379 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326387 | orchestrator |
2025-09-23 07:33:04.326396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326405 | orchestrator | Tuesday 23 September 2025 07:32:56 +0000 (0:00:00.216) 0:01:00.428 *****
2025-09-23 07:33:04.326413 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326422 | orchestrator |
2025-09-23 07:33:04.326431 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326439 | orchestrator | Tuesday 23 September 2025 07:32:57 +0000 (0:00:00.219) 0:01:00.648 *****
2025-09-23 07:33:04.326448 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326456 | orchestrator |
2025-09-23 07:33:04.326465 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326474 | orchestrator | Tuesday 23 September 2025 07:32:57 +0000 (0:00:00.197) 0:01:00.845 *****
2025-09-23 07:33:04.326482 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326491 | orchestrator |
2025-09-23 07:33:04.326499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326508 | orchestrator | Tuesday 23 September 2025 07:32:57 +0000 (0:00:00.229) 0:01:01.075 *****
2025-09-23 07:33:04.326516 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326525 | orchestrator |
2025-09-23 07:33:04.326533 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326542 | orchestrator | Tuesday 23 September 2025 07:32:57 +0000 (0:00:00.214) 0:01:01.289 *****
2025-09-23 07:33:04.326551 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-23 07:33:04.326559 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-23 07:33:04.326568 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-23 07:33:04.326577 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-23 07:33:04.326585 | orchestrator |
2025-09-23 07:33:04.326594 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326603 | orchestrator | Tuesday 23 September 2025 07:32:58 +0000 (0:00:00.640) 0:01:01.929 *****
2025-09-23 07:33:04.326613 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326624 | orchestrator |
2025-09-23 07:33:04.326634 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326644 | orchestrator | Tuesday 23 September 2025 07:32:58 +0000 (0:00:00.196) 0:01:02.126 *****
2025-09-23 07:33:04.326654 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326664 | orchestrator |
2025-09-23 07:33:04.326675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326685 | orchestrator | Tuesday 23 September 2025 07:32:58 +0000 (0:00:00.255) 0:01:02.381 *****
2025-09-23 07:33:04.326695 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326705 | orchestrator |
2025-09-23 07:33:04.326715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-23 07:33:04.326725 | orchestrator | Tuesday 23 September 2025 07:32:59 +0000 (0:00:00.181) 0:01:02.563 *****
2025-09-23 07:33:04.326734 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326745 | orchestrator |
2025-09-23 07:33:04.326754 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-23 07:33:04.326764 | orchestrator | Tuesday 23 September 2025 07:32:59 +0000 (0:00:00.216) 0:01:02.779 *****
2025-09-23 07:33:04.326774 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326784 | orchestrator |
2025-09-23 07:33:04.326794 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-23 07:33:04.326804 | orchestrator | Tuesday 23 September 2025 07:32:59 +0000 (0:00:00.270) 0:01:03.049 *****
2025-09-23 07:33:04.326814 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a27826e-7697-5dae-8bcf-65313ee63b58'}})
2025-09-23 07:33:04.326824 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b31a677e-efd4-57fc-b4ad-0e2207d5fa48'}})
2025-09-23 07:33:04.326840 | orchestrator |
2025-09-23 07:33:04.326850 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-23 07:33:04.326860 | orchestrator | Tuesday 23 September 2025 07:32:59 +0000 (0:00:00.195) 0:01:03.245 *****
2025-09-23 07:33:04.326870 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})
2025-09-23 07:33:04.326881 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})
2025-09-23 07:33:04.326891 | orchestrator |
2025-09-23 07:33:04.326902 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-23 07:33:04.326925 | orchestrator | Tuesday 23 September 2025 07:33:01 +0000 (0:00:01.823) 0:01:05.069 *****
2025-09-23 07:33:04.326936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})
2025-09-23 07:33:04.326947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})
2025-09-23 07:33:04.326957 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.326967 | orchestrator |
2025-09-23 07:33:04.326976 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-23 07:33:04.326984 | orchestrator | Tuesday 23 September 2025 07:33:01 +0000 (0:00:00.131) 0:01:05.201 *****
2025-09-23 07:33:04.326993 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})
2025-09-23 07:33:04.327014 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})
2025-09-23 07:33:04.327024 | orchestrator |
2025-09-23 07:33:04.327033 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-23 07:33:04.327041 | orchestrator | Tuesday 23 September 2025 07:33:02 +0000 (0:00:01.249) 0:01:06.450 *****
2025-09-23 07:33:04.327050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})
2025-09-23 07:33:04.327059 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})
2025-09-23 07:33:04.327067 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.327076 | orchestrator |
2025-09-23 07:33:04.327085 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-23 07:33:04.327093 | orchestrator | Tuesday 23 September 2025 07:33:03 +0000 (0:00:00.145) 0:01:06.596 *****
2025-09-23 07:33:04.327102 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.327110 | orchestrator |
2025-09-23 07:33:04.327119 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-23 07:33:04.327127 | orchestrator | Tuesday 23 September 2025 07:33:03 +0000 (0:00:00.138) 0:01:06.734 *****
2025-09-23 07:33:04.327152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})
2025-09-23 07:33:04.327166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})
2025-09-23 07:33:04.327175 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.327184 | orchestrator |
2025-09-23 07:33:04.327193 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-23 07:33:04.327201 | orchestrator | Tuesday 23 September 2025 07:33:03 +0000 (0:00:00.145) 0:01:06.880 *****
2025-09-23 07:33:04.327210 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.327224 | orchestrator |
2025-09-23 07:33:04.327232 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-23 07:33:04.327241 | orchestrator | Tuesday 23 September 2025 07:33:03 +0000 (0:00:00.140) 0:01:07.020 *****
2025-09-23 07:33:04.327249 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})
2025-09-23 07:33:04.327258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})
2025-09-23 07:33:04.327267 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.327275 | orchestrator |
2025-09-23 07:33:04.327284 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-23 07:33:04.327293 | orchestrator | Tuesday 23 September 2025 07:33:03 +0000 (0:00:00.134) 0:01:07.155 *****
2025-09-23 07:33:04.327301 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:04.327310 | orchestrator |
2025-09-23 07:33:04.327318 | orchestrator | TASK
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-23 07:33:04.327327 | orchestrator | Tuesday 23 September 2025 07:33:03 +0000 (0:00:00.125) 0:01:07.281 ***** 2025-09-23 07:33:04.327336 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:04.327344 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:04.327353 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:04.327361 | orchestrator | 2025-09-23 07:33:04.327370 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-23 07:33:04.327378 | orchestrator | Tuesday 23 September 2025 07:33:03 +0000 (0:00:00.146) 0:01:07.428 ***** 2025-09-23 07:33:04.327387 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:33:04.327396 | orchestrator | 2025-09-23 07:33:04.327404 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-23 07:33:04.327413 | orchestrator | Tuesday 23 September 2025 07:33:04 +0000 (0:00:00.270) 0:01:07.699 ***** 2025-09-23 07:33:04.327427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:10.557338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:10.557437 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.557451 | orchestrator | 2025-09-23 07:33:10.557463 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-23 07:33:10.557475 | orchestrator | Tuesday 23 September 2025 
07:33:04 +0000 (0:00:00.142) 0:01:07.841 ***** 2025-09-23 07:33:10.557485 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:10.557495 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:10.557505 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.557515 | orchestrator | 2025-09-23 07:33:10.557526 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-23 07:33:10.557536 | orchestrator | Tuesday 23 September 2025 07:33:04 +0000 (0:00:00.131) 0:01:07.973 ***** 2025-09-23 07:33:10.557545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:10.557555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:10.557564 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.557597 | orchestrator | 2025-09-23 07:33:10.557607 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-23 07:33:10.557617 | orchestrator | Tuesday 23 September 2025 07:33:04 +0000 (0:00:00.150) 0:01:08.124 ***** 2025-09-23 07:33:10.557626 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.557636 | orchestrator | 2025-09-23 07:33:10.557645 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-23 07:33:10.557654 | orchestrator | Tuesday 23 September 2025 07:33:04 +0000 (0:00:00.138) 0:01:08.262 ***** 2025-09-23 07:33:10.557663 | orchestrator | skipping: [testbed-node-5] 2025-09-23 
07:33:10.557673 | orchestrator | 2025-09-23 07:33:10.557682 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-23 07:33:10.557691 | orchestrator | Tuesday 23 September 2025 07:33:04 +0000 (0:00:00.144) 0:01:08.407 ***** 2025-09-23 07:33:10.557701 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.557710 | orchestrator | 2025-09-23 07:33:10.557720 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-23 07:33:10.557744 | orchestrator | Tuesday 23 September 2025 07:33:05 +0000 (0:00:00.139) 0:01:08.546 ***** 2025-09-23 07:33:10.557754 | orchestrator | ok: [testbed-node-5] => { 2025-09-23 07:33:10.557764 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-23 07:33:10.557773 | orchestrator | } 2025-09-23 07:33:10.557783 | orchestrator | 2025-09-23 07:33:10.557793 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-23 07:33:10.557802 | orchestrator | Tuesday 23 September 2025 07:33:05 +0000 (0:00:00.163) 0:01:08.709 ***** 2025-09-23 07:33:10.557811 | orchestrator | ok: [testbed-node-5] => { 2025-09-23 07:33:10.557821 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-23 07:33:10.557830 | orchestrator | } 2025-09-23 07:33:10.557839 | orchestrator | 2025-09-23 07:33:10.557849 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-23 07:33:10.557859 | orchestrator | Tuesday 23 September 2025 07:33:05 +0000 (0:00:00.146) 0:01:08.856 ***** 2025-09-23 07:33:10.557869 | orchestrator | ok: [testbed-node-5] => { 2025-09-23 07:33:10.557878 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-23 07:33:10.557888 | orchestrator | } 2025-09-23 07:33:10.557897 | orchestrator | 2025-09-23 07:33:10.557907 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-23 07:33:10.557916 | 
orchestrator | Tuesday 23 September 2025 07:33:05 +0000 (0:00:00.144) 0:01:09.000 ***** 2025-09-23 07:33:10.557925 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:33:10.557935 | orchestrator | 2025-09-23 07:33:10.557944 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-23 07:33:10.557954 | orchestrator | Tuesday 23 September 2025 07:33:05 +0000 (0:00:00.486) 0:01:09.487 ***** 2025-09-23 07:33:10.557963 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:33:10.557972 | orchestrator | 2025-09-23 07:33:10.557982 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-23 07:33:10.557991 | orchestrator | Tuesday 23 September 2025 07:33:06 +0000 (0:00:00.483) 0:01:09.970 ***** 2025-09-23 07:33:10.558000 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:33:10.558010 | orchestrator | 2025-09-23 07:33:10.558079 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-23 07:33:10.558089 | orchestrator | Tuesday 23 September 2025 07:33:07 +0000 (0:00:00.708) 0:01:10.678 ***** 2025-09-23 07:33:10.558098 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:33:10.558108 | orchestrator | 2025-09-23 07:33:10.558117 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-23 07:33:10.558127 | orchestrator | Tuesday 23 September 2025 07:33:07 +0000 (0:00:00.150) 0:01:10.829 ***** 2025-09-23 07:33:10.558157 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558196 | orchestrator | 2025-09-23 07:33:10.558207 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-23 07:33:10.558217 | orchestrator | Tuesday 23 September 2025 07:33:07 +0000 (0:00:00.119) 0:01:10.948 ***** 2025-09-23 07:33:10.558236 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558245 | orchestrator | 2025-09-23 07:33:10.558255 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-23 07:33:10.558265 | orchestrator | Tuesday 23 September 2025 07:33:07 +0000 (0:00:00.118) 0:01:11.066 ***** 2025-09-23 07:33:10.558274 | orchestrator | ok: [testbed-node-5] => { 2025-09-23 07:33:10.558284 | orchestrator |  "vgs_report": { 2025-09-23 07:33:10.558294 | orchestrator |  "vg": [] 2025-09-23 07:33:10.558319 | orchestrator |  } 2025-09-23 07:33:10.558330 | orchestrator | } 2025-09-23 07:33:10.558339 | orchestrator | 2025-09-23 07:33:10.558349 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-23 07:33:10.558358 | orchestrator | Tuesday 23 September 2025 07:33:07 +0000 (0:00:00.146) 0:01:11.213 ***** 2025-09-23 07:33:10.558368 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558377 | orchestrator | 2025-09-23 07:33:10.558387 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-23 07:33:10.558396 | orchestrator | Tuesday 23 September 2025 07:33:07 +0000 (0:00:00.137) 0:01:11.351 ***** 2025-09-23 07:33:10.558406 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558415 | orchestrator | 2025-09-23 07:33:10.558425 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-23 07:33:10.558434 | orchestrator | Tuesday 23 September 2025 07:33:07 +0000 (0:00:00.143) 0:01:11.495 ***** 2025-09-23 07:33:10.558444 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558453 | orchestrator | 2025-09-23 07:33:10.558463 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-23 07:33:10.558472 | orchestrator | Tuesday 23 September 2025 07:33:08 +0000 (0:00:00.151) 0:01:11.647 ***** 2025-09-23 07:33:10.558482 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558492 | orchestrator | 2025-09-23 07:33:10.558501 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-23 07:33:10.558510 | orchestrator | Tuesday 23 September 2025 07:33:08 +0000 (0:00:00.150) 0:01:11.797 ***** 2025-09-23 07:33:10.558520 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558529 | orchestrator | 2025-09-23 07:33:10.558539 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-23 07:33:10.558548 | orchestrator | Tuesday 23 September 2025 07:33:08 +0000 (0:00:00.152) 0:01:11.949 ***** 2025-09-23 07:33:10.558558 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558567 | orchestrator | 2025-09-23 07:33:10.558577 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-23 07:33:10.558586 | orchestrator | Tuesday 23 September 2025 07:33:08 +0000 (0:00:00.138) 0:01:12.088 ***** 2025-09-23 07:33:10.558596 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558605 | orchestrator | 2025-09-23 07:33:10.558615 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-23 07:33:10.558624 | orchestrator | Tuesday 23 September 2025 07:33:08 +0000 (0:00:00.149) 0:01:12.238 ***** 2025-09-23 07:33:10.558634 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558643 | orchestrator | 2025-09-23 07:33:10.558653 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-23 07:33:10.558662 | orchestrator | Tuesday 23 September 2025 07:33:08 +0000 (0:00:00.144) 0:01:12.382 ***** 2025-09-23 07:33:10.558672 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558681 | orchestrator | 2025-09-23 07:33:10.558691 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-23 07:33:10.558706 | orchestrator | Tuesday 23 September 2025 07:33:09 +0000 (0:00:00.369) 0:01:12.752 ***** 
2025-09-23 07:33:10.558715 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558725 | orchestrator | 2025-09-23 07:33:10.558735 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-23 07:33:10.558744 | orchestrator | Tuesday 23 September 2025 07:33:09 +0000 (0:00:00.181) 0:01:12.934 ***** 2025-09-23 07:33:10.558753 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558778 | orchestrator | 2025-09-23 07:33:10.558787 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-23 07:33:10.558797 | orchestrator | Tuesday 23 September 2025 07:33:09 +0000 (0:00:00.166) 0:01:13.101 ***** 2025-09-23 07:33:10.558806 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558816 | orchestrator | 2025-09-23 07:33:10.558826 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-23 07:33:10.558835 | orchestrator | Tuesday 23 September 2025 07:33:09 +0000 (0:00:00.152) 0:01:13.253 ***** 2025-09-23 07:33:10.558845 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558854 | orchestrator | 2025-09-23 07:33:10.558864 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-23 07:33:10.558873 | orchestrator | Tuesday 23 September 2025 07:33:09 +0000 (0:00:00.152) 0:01:13.406 ***** 2025-09-23 07:33:10.558883 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558892 | orchestrator | 2025-09-23 07:33:10.558902 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-23 07:33:10.558911 | orchestrator | Tuesday 23 September 2025 07:33:10 +0000 (0:00:00.161) 0:01:13.568 ***** 2025-09-23 07:33:10.558921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 
07:33:10.558930 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:10.558940 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.558949 | orchestrator | 2025-09-23 07:33:10.558959 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-23 07:33:10.558968 | orchestrator | Tuesday 23 September 2025 07:33:10 +0000 (0:00:00.188) 0:01:13.756 ***** 2025-09-23 07:33:10.558978 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:10.558987 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:10.558997 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:10.559006 | orchestrator | 2025-09-23 07:33:10.559016 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-23 07:33:10.559025 | orchestrator | Tuesday 23 September 2025 07:33:10 +0000 (0:00:00.148) 0:01:13.905 ***** 2025-09-23 07:33:10.559040 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613298 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613378 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:13.613389 | orchestrator | 2025-09-23 07:33:13.613397 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-23 07:33:13.613404 | orchestrator | Tuesday 23 September 2025 
07:33:10 +0000 (0:00:00.167) 0:01:14.072 ***** 2025-09-23 07:33:13.613411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613424 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:13.613430 | orchestrator | 2025-09-23 07:33:13.613436 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-23 07:33:13.613442 | orchestrator | Tuesday 23 September 2025 07:33:10 +0000 (0:00:00.158) 0:01:14.231 ***** 2025-09-23 07:33:13.613449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613472 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613479 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:13.613485 | orchestrator | 2025-09-23 07:33:13.613491 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-23 07:33:13.613497 | orchestrator | Tuesday 23 September 2025 07:33:10 +0000 (0:00:00.183) 0:01:14.414 ***** 2025-09-23 07:33:13.613503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613516 | orchestrator | skipping: 
[testbed-node-5] 2025-09-23 07:33:13.613522 | orchestrator | 2025-09-23 07:33:13.613528 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-23 07:33:13.613535 | orchestrator | Tuesday 23 September 2025 07:33:11 +0000 (0:00:00.143) 0:01:14.558 ***** 2025-09-23 07:33:13.613541 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613553 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:13.613560 | orchestrator | 2025-09-23 07:33:13.613566 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-23 07:33:13.613572 | orchestrator | Tuesday 23 September 2025 07:33:11 +0000 (0:00:00.396) 0:01:14.955 ***** 2025-09-23 07:33:13.613579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613585 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613591 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:13.613597 | orchestrator | 2025-09-23 07:33:13.613603 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-23 07:33:13.613609 | orchestrator | Tuesday 23 September 2025 07:33:11 +0000 (0:00:00.164) 0:01:15.119 ***** 2025-09-23 07:33:13.613616 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:33:13.613622 | orchestrator | 2025-09-23 07:33:13.613628 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-09-23 07:33:13.613634 | orchestrator | Tuesday 23 September 2025 07:33:12 +0000 (0:00:00.489) 0:01:15.608 ***** 2025-09-23 07:33:13.613641 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:33:13.613647 | orchestrator | 2025-09-23 07:33:13.613653 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-23 07:33:13.613659 | orchestrator | Tuesday 23 September 2025 07:33:12 +0000 (0:00:00.509) 0:01:16.118 ***** 2025-09-23 07:33:13.613665 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:33:13.613671 | orchestrator | 2025-09-23 07:33:13.613677 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-23 07:33:13.613683 | orchestrator | Tuesday 23 September 2025 07:33:12 +0000 (0:00:00.157) 0:01:16.275 ***** 2025-09-23 07:33:13.613689 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'vg_name': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'}) 2025-09-23 07:33:13.613697 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'vg_name': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'}) 2025-09-23 07:33:13.613703 | orchestrator | 2025-09-23 07:33:13.613709 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-23 07:33:13.613722 | orchestrator | Tuesday 23 September 2025 07:33:12 +0000 (0:00:00.181) 0:01:16.456 ***** 2025-09-23 07:33:13.613740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613755 | orchestrator | skipping: 
[testbed-node-5] 2025-09-23 07:33:13.613765 | orchestrator | 2025-09-23 07:33:13.613776 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-23 07:33:13.613785 | orchestrator | Tuesday 23 September 2025 07:33:13 +0000 (0:00:00.169) 0:01:16.626 ***** 2025-09-23 07:33:13.613795 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613805 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613815 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:13.613825 | orchestrator | 2025-09-23 07:33:13.613836 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-23 07:33:13.613846 | orchestrator | Tuesday 23 September 2025 07:33:13 +0000 (0:00:00.156) 0:01:16.782 ***** 2025-09-23 07:33:13.613858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})  2025-09-23 07:33:13.613887 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})  2025-09-23 07:33:13.613900 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:33:13.613908 | orchestrator | 2025-09-23 07:33:13.613915 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-23 07:33:13.613922 | orchestrator | Tuesday 23 September 2025 07:33:13 +0000 (0:00:00.156) 0:01:16.939 ***** 2025-09-23 07:33:13.613929 | orchestrator | ok: [testbed-node-5] => { 2025-09-23 07:33:13.613936 | orchestrator |  "lvm_report": { 2025-09-23 07:33:13.613944 | orchestrator |  "lv": [ 2025-09-23 
07:33:13.613950 | orchestrator |  { 2025-09-23 07:33:13.613958 | orchestrator |  "lv_name": "osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58", 2025-09-23 07:33:13.613969 | orchestrator |  "vg_name": "ceph-4a27826e-7697-5dae-8bcf-65313ee63b58" 2025-09-23 07:33:13.613976 | orchestrator |  }, 2025-09-23 07:33:13.613984 | orchestrator |  { 2025-09-23 07:33:13.613991 | orchestrator |  "lv_name": "osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48", 2025-09-23 07:33:13.613998 | orchestrator |  "vg_name": "ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48" 2025-09-23 07:33:13.614005 | orchestrator |  } 2025-09-23 07:33:13.614011 | orchestrator |  ], 2025-09-23 07:33:13.614087 | orchestrator |  "pv": [ 2025-09-23 07:33:13.614094 | orchestrator |  { 2025-09-23 07:33:13.614101 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-23 07:33:13.614108 | orchestrator |  "vg_name": "ceph-4a27826e-7697-5dae-8bcf-65313ee63b58" 2025-09-23 07:33:13.614115 | orchestrator |  }, 2025-09-23 07:33:13.614122 | orchestrator |  { 2025-09-23 07:33:13.614128 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-23 07:33:13.614162 | orchestrator |  "vg_name": "ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48" 2025-09-23 07:33:13.614170 | orchestrator |  } 2025-09-23 07:33:13.614176 | orchestrator |  ] 2025-09-23 07:33:13.614183 | orchestrator |  } 2025-09-23 07:33:13.614190 | orchestrator | } 2025-09-23 07:33:13.614197 | orchestrator | 2025-09-23 07:33:13.614204 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:33:13.614218 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-23 07:33:13.614225 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-23 07:33:13.614231 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-23 07:33:13.614237 | orchestrator | 2025-09-23 07:33:13.614243 | 
orchestrator | 2025-09-23 07:33:13.614249 | orchestrator | 2025-09-23 07:33:13.614255 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:33:13.614262 | orchestrator | Tuesday 23 September 2025 07:33:13 +0000 (0:00:00.156) 0:01:17.096 ***** 2025-09-23 07:33:13.614268 | orchestrator | =============================================================================== 2025-09-23 07:33:13.614274 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s 2025-09-23 07:33:13.614280 | orchestrator | Create block LVs -------------------------------------------------------- 5.00s 2025-09-23 07:33:13.614286 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.87s 2025-09-23 07:33:13.614292 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.80s 2025-09-23 07:33:13.614298 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2025-09-23 07:33:13.614304 | orchestrator | Add known partitions to the list of available block devices ------------- 1.57s 2025-09-23 07:33:13.614310 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s 2025-09-23 07:33:13.614316 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.50s 2025-09-23 07:33:13.614329 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s 2025-09-23 07:33:13.993602 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s 2025-09-23 07:33:13.993679 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2025-09-23 07:33:13.993686 | orchestrator | Print LVM report data --------------------------------------------------- 1.20s 2025-09-23 07:33:13.993692 | orchestrator | Add known links to the list of available 
block devices ------------------ 1.00s 2025-09-23 07:33:13.993698 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.94s 2025-09-23 07:33:13.993703 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.92s 2025-09-23 07:33:13.993709 | orchestrator | Get initial list of available block devices ----------------------------- 0.85s 2025-09-23 07:33:13.993714 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.73s 2025-09-23 07:33:13.993720 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.71s 2025-09-23 07:33:13.993725 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.71s 2025-09-23 07:33:13.993730 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2025-09-23 07:33:26.437565 | orchestrator | 2025-09-23 07:33:26 | INFO  | Task 90612c6a-004e-4af8-bd16-7c7069181b2d (facts) was prepared for execution. 2025-09-23 07:33:26.437652 | orchestrator | 2025-09-23 07:33:26 | INFO  | It takes a moment until task 90612c6a-004e-4af8-bd16-7c7069181b2d (facts) has been started and output is visible here. 
2025-09-23 07:33:38.420756 | orchestrator | 
2025-09-23 07:33:38.420865 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-23 07:33:38.420881 | orchestrator | 
2025-09-23 07:33:38.420892 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-23 07:33:38.420903 | orchestrator | Tuesday 23 September 2025 07:33:30 +0000 (0:00:00.278) 0:00:00.278 *****
2025-09-23 07:33:38.420913 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:33:38.420924 | orchestrator | ok: [testbed-manager]
2025-09-23 07:33:38.420956 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:33:38.420967 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:33:38.420976 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:33:38.420986 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:33:38.420996 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:33:38.421006 | orchestrator | 
2025-09-23 07:33:38.421017 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-23 07:33:38.421027 | orchestrator | Tuesday 23 September 2025 07:33:31 +0000 (0:00:01.063) 0:00:01.342 *****
2025-09-23 07:33:38.421050 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:33:38.421061 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:33:38.421073 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:33:38.421083 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:33:38.421093 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:33:38.421103 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:33:38.421113 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:38.421122 | orchestrator | 
2025-09-23 07:33:38.421176 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-23 07:33:38.421186 | orchestrator | 
2025-09-23 07:33:38.421196 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-23 07:33:38.421205 | orchestrator | Tuesday 23 September 2025 07:33:32 +0000 (0:00:01.255) 0:00:02.598 *****
2025-09-23 07:33:38.421215 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:33:38.421224 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:33:38.421234 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:33:38.421244 | orchestrator | ok: [testbed-manager]
2025-09-23 07:33:38.421253 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:33:38.421263 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:33:38.421273 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:33:38.421282 | orchestrator | 
2025-09-23 07:33:38.421292 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-23 07:33:38.421301 | orchestrator | 
2025-09-23 07:33:38.421311 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-23 07:33:38.421320 | orchestrator | Tuesday 23 September 2025 07:33:37 +0000 (0:00:04.543) 0:00:07.141 *****
2025-09-23 07:33:38.421330 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:33:38.421339 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:33:38.421349 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:33:38.421358 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:33:38.421368 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:33:38.421377 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:33:38.421387 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:33:38.421396 | orchestrator | 
2025-09-23 07:33:38.421406 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:33:38.421416 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:33:38.421426 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:33:38.421436 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:33:38.421445 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:33:38.421459 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:33:38.421479 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:33:38.421504 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:33:38.421531 | orchestrator | 
2025-09-23 07:33:38.421549 | orchestrator | 
2025-09-23 07:33:38.421566 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:33:38.421582 | orchestrator | Tuesday 23 September 2025 07:33:38 +0000 (0:00:00.569) 0:00:07.711 *****
2025-09-23 07:33:38.421599 | orchestrator | ===============================================================================
2025-09-23 07:33:38.421609 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.54s
2025-09-23 07:33:38.421619 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2025-09-23 07:33:38.421628 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s
2025-09-23 07:33:38.421638 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2025-09-23 07:33:50.890469 | orchestrator | 2025-09-23 07:33:50 | INFO  | Task ec7aa05a-6329-4e7e-ac51-b1df88e2f061 (frr) was prepared for execution.
2025-09-23 07:33:50.895318 | orchestrator | 2025-09-23 07:33:50 | INFO  | It takes a moment until task ec7aa05a-6329-4e7e-ac51-b1df88e2f061 (frr) has been started and output is visible here.
2025-09-23 07:34:15.535031 | orchestrator | 2025-09-23 07:34:15.535093 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-23 07:34:15.535107 | orchestrator | 2025-09-23 07:34:15.535160 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-23 07:34:15.535173 | orchestrator | Tuesday 23 September 2025 07:33:54 +0000 (0:00:00.239) 0:00:00.239 ***** 2025-09-23 07:34:15.535185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-23 07:34:15.535197 | orchestrator | 2025-09-23 07:34:15.535208 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-23 07:34:15.535219 | orchestrator | Tuesday 23 September 2025 07:33:54 +0000 (0:00:00.220) 0:00:00.459 ***** 2025-09-23 07:34:15.535230 | orchestrator | changed: [testbed-manager] 2025-09-23 07:34:15.535241 | orchestrator | 2025-09-23 07:34:15.535252 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-23 07:34:15.535263 | orchestrator | Tuesday 23 September 2025 07:33:56 +0000 (0:00:01.169) 0:00:01.629 ***** 2025-09-23 07:34:15.535274 | orchestrator | changed: [testbed-manager] 2025-09-23 07:34:15.535284 | orchestrator | 2025-09-23 07:34:15.535309 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-23 07:34:15.535320 | orchestrator | Tuesday 23 September 2025 07:34:05 +0000 (0:00:09.723) 0:00:11.352 ***** 2025-09-23 07:34:15.535331 | orchestrator | ok: [testbed-manager] 2025-09-23 07:34:15.535343 | orchestrator | 2025-09-23 07:34:15.535353 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-23 07:34:15.535364 | orchestrator | Tuesday 23 September 2025 07:34:07 +0000 (0:00:01.317) 0:00:12.670 ***** 2025-09-23 
07:34:15.535375 | orchestrator | changed: [testbed-manager] 2025-09-23 07:34:15.535386 | orchestrator | 2025-09-23 07:34:15.535396 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-23 07:34:15.535407 | orchestrator | Tuesday 23 September 2025 07:34:08 +0000 (0:00:00.895) 0:00:13.566 ***** 2025-09-23 07:34:15.535418 | orchestrator | ok: [testbed-manager] 2025-09-23 07:34:15.535429 | orchestrator | 2025-09-23 07:34:15.535439 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-23 07:34:15.535450 | orchestrator | Tuesday 23 September 2025 07:34:09 +0000 (0:00:01.106) 0:00:14.673 ***** 2025-09-23 07:34:15.535461 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-23 07:34:15.535472 | orchestrator | 2025-09-23 07:34:15.535483 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-23 07:34:15.535493 | orchestrator | Tuesday 23 September 2025 07:34:09 +0000 (0:00:00.740) 0:00:15.413 ***** 2025-09-23 07:34:15.535504 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:34:15.535515 | orchestrator | 2025-09-23 07:34:15.535526 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-23 07:34:15.535557 | orchestrator | Tuesday 23 September 2025 07:34:09 +0000 (0:00:00.140) 0:00:15.554 ***** 2025-09-23 07:34:15.535568 | orchestrator | changed: [testbed-manager] 2025-09-23 07:34:15.535581 | orchestrator | 2025-09-23 07:34:15.535594 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-23 07:34:15.535606 | orchestrator | Tuesday 23 September 2025 07:34:10 +0000 (0:00:00.827) 0:00:16.382 ***** 2025-09-23 07:34:15.535618 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-23 07:34:15.535631 | orchestrator | changed: [testbed-manager] => 
(item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-23 07:34:15.535645 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-23 07:34:15.535657 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-23 07:34:15.535667 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-23 07:34:15.535678 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-23 07:34:15.535689 | orchestrator | 2025-09-23 07:34:15.535699 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-23 07:34:15.535710 | orchestrator | Tuesday 23 September 2025 07:34:12 +0000 (0:00:01.988) 0:00:18.371 ***** 2025-09-23 07:34:15.535720 | orchestrator | ok: [testbed-manager] 2025-09-23 07:34:15.535731 | orchestrator | 2025-09-23 07:34:15.535741 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-23 07:34:15.535752 | orchestrator | Tuesday 23 September 2025 07:34:14 +0000 (0:00:01.227) 0:00:19.598 ***** 2025-09-23 07:34:15.535762 | orchestrator | changed: [testbed-manager] 2025-09-23 07:34:15.535773 | orchestrator | 2025-09-23 07:34:15.535784 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:34:15.535794 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:34:15.535805 | orchestrator | 2025-09-23 07:34:15.535816 | orchestrator | 2025-09-23 07:34:15.535826 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:34:15.535837 | orchestrator | Tuesday 23 September 2025 07:34:15 +0000 (0:00:01.308) 0:00:20.906 ***** 2025-09-23 
07:34:15.535847 | orchestrator | =============================================================================== 2025-09-23 07:34:15.535858 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.72s 2025-09-23 07:34:15.535868 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.99s 2025-09-23 07:34:15.535879 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.32s 2025-09-23 07:34:15.535889 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.31s 2025-09-23 07:34:15.535915 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.23s 2025-09-23 07:34:15.535927 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.17s 2025-09-23 07:34:15.535937 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s 2025-09-23 07:34:15.535948 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.90s 2025-09-23 07:34:15.535958 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.83s 2025-09-23 07:34:15.535969 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.74s 2025-09-23 07:34:15.535980 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2025-09-23 07:34:15.535990 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.14s 2025-09-23 07:34:15.733858 | orchestrator | 2025-09-23 07:34:15.737384 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Sep 23 07:34:15 UTC 2025 2025-09-23 07:34:15.737436 | orchestrator | 2025-09-23 07:34:17.423559 | orchestrator | 2025-09-23 07:34:17 | INFO  | Collection nutshell is prepared for execution 2025-09-23 07:34:17.424343 | orchestrator | 2025-09-23 
07:34:17 | INFO  | D [0] - dotfiles 2025-09-23 07:34:27.514655 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [0] - homer 2025-09-23 07:34:27.514735 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [0] - netdata 2025-09-23 07:34:27.514750 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [0] - openstackclient 2025-09-23 07:34:27.514762 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [0] - phpmyadmin 2025-09-23 07:34:27.514774 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [0] - common 2025-09-23 07:34:27.518277 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [1] -- loadbalancer 2025-09-23 07:34:27.518410 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [2] --- opensearch 2025-09-23 07:34:27.518428 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [2] --- mariadb-ng 2025-09-23 07:34:27.518440 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [3] ---- horizon 2025-09-23 07:34:27.518458 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [3] ---- keystone 2025-09-23 07:34:27.518838 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [4] ----- neutron 2025-09-23 07:34:27.519039 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [5] ------ wait-for-nova 2025-09-23 07:34:27.519379 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [5] ------ octavia 2025-09-23 07:34:27.520620 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [4] ----- barbican 2025-09-23 07:34:27.520649 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [4] ----- designate 2025-09-23 07:34:27.520661 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [4] ----- ironic 2025-09-23 07:34:27.520897 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [4] ----- placement 2025-09-23 07:34:27.521107 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [4] ----- magnum 2025-09-23 07:34:27.522159 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [1] -- openvswitch 2025-09-23 07:34:27.522183 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [2] --- ovn 2025-09-23 07:34:27.522694 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [1] -- 
memcached 2025-09-23 07:34:27.522727 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [1] -- redis 2025-09-23 07:34:27.523030 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [1] -- rabbitmq-ng 2025-09-23 07:34:27.523289 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [0] - kubernetes 2025-09-23 07:34:27.525509 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [1] -- kubeconfig 2025-09-23 07:34:27.525537 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [1] -- copy-kubeconfig 2025-09-23 07:34:27.525901 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [0] - ceph 2025-09-23 07:34:27.527897 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [1] -- ceph-pools 2025-09-23 07:34:27.527956 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [2] --- copy-ceph-keys 2025-09-23 07:34:27.528096 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [3] ---- cephclient 2025-09-23 07:34:27.528142 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-23 07:34:27.528261 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [4] ----- wait-for-keystone 2025-09-23 07:34:27.528623 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-23 07:34:27.528647 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [5] ------ glance 2025-09-23 07:34:27.528838 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [5] ------ cinder 2025-09-23 07:34:27.528859 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [5] ------ nova 2025-09-23 07:34:27.529312 | orchestrator | 2025-09-23 07:34:27 | INFO  | A [4] ----- prometheus 2025-09-23 07:34:27.529334 | orchestrator | 2025-09-23 07:34:27 | INFO  | D [5] ------ grafana 2025-09-23 07:34:27.702103 | orchestrator | 2025-09-23 07:34:27 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-23 07:34:27.702209 | orchestrator | 2025-09-23 07:34:27 | INFO  | Tasks are running in the background 2025-09-23 07:34:30.363102 | orchestrator | 2025-09-23 07:34:30 | INFO  | No task IDs specified, wait for 
all currently running tasks 2025-09-23 07:34:32.476843 | orchestrator | 2025-09-23 07:34:32 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:32.479376 | orchestrator | 2025-09-23 07:34:32 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:32.479824 | orchestrator | 2025-09-23 07:34:32 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:32.480394 | orchestrator | 2025-09-23 07:34:32 | INFO  | Task b96cd332-ec43-440f-bfc9-7f7bfc925a1c is in state STARTED 2025-09-23 07:34:32.481010 | orchestrator | 2025-09-23 07:34:32 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:32.484859 | orchestrator | 2025-09-23 07:34:32 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:32.485327 | orchestrator | 2025-09-23 07:34:32 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:32.485357 | orchestrator | 2025-09-23 07:34:32 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:34:35.517714 | orchestrator | 2025-09-23 07:34:35 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:35.519971 | orchestrator | 2025-09-23 07:34:35 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:35.520404 | orchestrator | 2025-09-23 07:34:35 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:35.520893 | orchestrator | 2025-09-23 07:34:35 | INFO  | Task b96cd332-ec43-440f-bfc9-7f7bfc925a1c is in state STARTED 2025-09-23 07:34:35.521359 | orchestrator | 2025-09-23 07:34:35 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:35.522932 | orchestrator | 2025-09-23 07:34:35 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:35.524786 | orchestrator | 2025-09-23 07:34:35 | INFO  | Task 
572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:35.524831 | orchestrator | 2025-09-23 07:34:35 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:34:38.574937 | orchestrator | 2025-09-23 07:34:38 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:38.575020 | orchestrator | 2025-09-23 07:34:38 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:38.575034 | orchestrator | 2025-09-23 07:34:38 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:38.575046 | orchestrator | 2025-09-23 07:34:38 | INFO  | Task b96cd332-ec43-440f-bfc9-7f7bfc925a1c is in state STARTED 2025-09-23 07:34:38.575249 | orchestrator | 2025-09-23 07:34:38 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:38.576918 | orchestrator | 2025-09-23 07:34:38 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:38.578248 | orchestrator | 2025-09-23 07:34:38 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:38.578278 | orchestrator | 2025-09-23 07:34:38 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:34:41.683040 | orchestrator | 2025-09-23 07:34:41 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:41.683167 | orchestrator | 2025-09-23 07:34:41 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:41.683182 | orchestrator | 2025-09-23 07:34:41 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:41.683194 | orchestrator | 2025-09-23 07:34:41 | INFO  | Task b96cd332-ec43-440f-bfc9-7f7bfc925a1c is in state STARTED 2025-09-23 07:34:41.685036 | orchestrator | 2025-09-23 07:34:41 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:41.688172 | orchestrator | 2025-09-23 07:34:41 | INFO  | Task 
772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:41.694081 | orchestrator | 2025-09-23 07:34:41 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:41.694173 | orchestrator | 2025-09-23 07:34:41 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:34:44.782076 | orchestrator | 2025-09-23 07:34:44 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:44.782212 | orchestrator | 2025-09-23 07:34:44 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:44.782243 | orchestrator | 2025-09-23 07:34:44 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:44.782265 | orchestrator | 2025-09-23 07:34:44 | INFO  | Task b96cd332-ec43-440f-bfc9-7f7bfc925a1c is in state STARTED 2025-09-23 07:34:44.782302 | orchestrator | 2025-09-23 07:34:44 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:44.782321 | orchestrator | 2025-09-23 07:34:44 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:44.782358 | orchestrator | 2025-09-23 07:34:44 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:44.782378 | orchestrator | 2025-09-23 07:34:44 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:34:47.785903 | orchestrator | 2025-09-23 07:34:47 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:47.785992 | orchestrator | 2025-09-23 07:34:47 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:47.786005 | orchestrator | 2025-09-23 07:34:47 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:47.786064 | orchestrator | 2025-09-23 07:34:47 | INFO  | Task b96cd332-ec43-440f-bfc9-7f7bfc925a1c is in state STARTED 2025-09-23 07:34:47.786076 | orchestrator | 2025-09-23 07:34:47 | INFO  | Task 
7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:47.786087 | orchestrator | 2025-09-23 07:34:47 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:47.787834 | orchestrator | 2025-09-23 07:34:47 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:47.787869 | orchestrator | 2025-09-23 07:34:47 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:34:50.861324 | orchestrator | 2025-09-23 07:34:50 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:50.864410 | orchestrator | 2025-09-23 07:34:50 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:50.869071 | orchestrator | 2025-09-23 07:34:50 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:50.871659 | orchestrator | 2025-09-23 07:34:50 | INFO  | Task b96cd332-ec43-440f-bfc9-7f7bfc925a1c is in state STARTED 2025-09-23 07:34:50.874293 | orchestrator | 2025-09-23 07:34:50 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:50.875659 | orchestrator | 2025-09-23 07:34:50 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:50.877920 | orchestrator | 2025-09-23 07:34:50 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:50.878300 | orchestrator | 2025-09-23 07:34:50 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:34:54.062781 | orchestrator | 2025-09-23 07:34:54 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:54.063602 | orchestrator | 2025-09-23 07:34:54 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:54.064619 | orchestrator | 2025-09-23 07:34:54 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:54.066240 | orchestrator | 2025-09-23 07:34:54 | INFO  | Task 
b96cd332-ec43-440f-bfc9-7f7bfc925a1c is in state SUCCESS 2025-09-23 07:34:54.066320 | orchestrator | 2025-09-23 07:34:54.066335 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-23 07:34:54.066347 | orchestrator | 2025-09-23 07:34:54.066358 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-09-23 07:34:54.066369 | orchestrator | Tuesday 23 September 2025 07:34:39 +0000 (0:00:00.924) 0:00:00.925 ***** 2025-09-23 07:34:54.066380 | orchestrator | changed: [testbed-manager] 2025-09-23 07:34:54.066391 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:34:54.066402 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:34:54.066412 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:34:54.066428 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:34:54.066448 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:34:54.066468 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:34:54.066486 | orchestrator | 2025-09-23 07:34:54.066515 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-09-23 07:34:54.066546 | orchestrator | Tuesday 23 September 2025 07:34:43 +0000 (0:00:04.395) 0:00:05.320 ***** 2025-09-23 07:34:54.066569 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-23 07:34:54.066587 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-23 07:34:54.066605 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-23 07:34:54.066625 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-23 07:34:54.066644 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-23 07:34:54.066663 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-23 07:34:54.066683 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-23 07:34:54.066702 | orchestrator | 2025-09-23 07:34:54.066720 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-23 07:34:54.066738 | orchestrator | Tuesday 23 September 2025 07:34:45 +0000 (0:00:01.313) 0:00:06.634 ***** 2025-09-23 07:34:54.066793 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-23 07:34:44.371481', 'end': '2025-09-23 07:34:44.380507', 'delta': '0:00:00.009026', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-23 07:34:54.066822 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-23 07:34:44.493106', 'end': '2025-09-23 07:34:44.500066', 'delta': '0:00:00.006960', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-23 07:34:54.066869 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-23 07:34:44.466371', 'end': '2025-09-23 07:34:44.472310', 'delta': '0:00:00.005939', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-23 07:34:54.066920 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-23 07:34:44.495680', 'end': '2025-09-23 07:34:44.505655', 'delta': '0:00:00.009975', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-23 07:34:54.066941 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-23 07:34:44.640986', 'end': '2025-09-23 07:34:44.652413', 'delta': '0:00:00.011427', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-23 07:34:54.066968 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-23 07:34:44.767935', 'end': '2025-09-23 07:34:44.779079', 'delta': '0:00:00.011144', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-23 07:34:54.067006 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-23 07:34:44.929913', 'end': '2025-09-23 07:34:44.940123', 'delta': '0:00:00.010210', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-23 07:34:54.067027 | orchestrator |
2025-09-23 07:34:54.067047 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
****
2025-09-23 07:34:54.067065 | orchestrator | Tuesday 23 September 2025 07:34:46 +0000 (0:00:01.704) 0:00:08.338 *****
2025-09-23 07:34:54.067086 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-23 07:34:54.067131 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-23 07:34:54.067152 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-23 07:34:54.067171 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-23 07:34:54.067190 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-23 07:34:54.067207 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-23 07:34:54.067224 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-23 07:34:54.067243 | orchestrator |
2025-09-23 07:34:54.067261 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-23 07:34:54.067280 | orchestrator | Tuesday 23 September 2025 07:34:48 +0000 (0:00:01.877) 0:00:10.216 *****
2025-09-23 07:34:54.067297 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-23 07:34:54.067314 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-23 07:34:54.067333 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-23 07:34:54.067352 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-23 07:34:54.067372 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-23 07:34:54.067390 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-23 07:34:54.067408 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-23 07:34:54.067428 | orchestrator |
2025-09-23 07:34:54.067447 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:34:54.067480 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:34:54.067501 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:34:54.067519 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:34:54.067537 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:34:54.067556 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:34:54.067573 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:34:54.067592 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:34:54.067631 | orchestrator |
2025-09-23 07:34:54.067657 | orchestrator |
2025-09-23 07:34:54.067675 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:34:54.067691 | orchestrator | Tuesday 23 September 2025 07:34:51 +0000 (0:00:02.472) 0:00:12.689 *****
2025-09-23 07:34:54.067709 | orchestrator | ===============================================================================
2025-09-23 07:34:54.067737 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.40s
2025-09-23 07:34:54.067766 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.47s
2025-09-23 07:34:54.067786 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.88s
2025-09-23 07:34:54.067804 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.70s
2025-09-23 07:34:54.067822 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links.
-------- 1.31s 2025-09-23 07:34:54.067840 | orchestrator | 2025-09-23 07:34:54 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:34:54.070173 | orchestrator | 2025-09-23 07:34:54 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:54.072300 | orchestrator | 2025-09-23 07:34:54 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:54.072622 | orchestrator | 2025-09-23 07:34:54 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:54.072694 | orchestrator | 2025-09-23 07:34:54 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:34:57.180578 | orchestrator | 2025-09-23 07:34:57 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:34:57.180642 | orchestrator | 2025-09-23 07:34:57 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:34:57.180651 | orchestrator | 2025-09-23 07:34:57 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:34:57.182083 | orchestrator | 2025-09-23 07:34:57 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:34:57.182523 | orchestrator | 2025-09-23 07:34:57 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:34:57.184839 | orchestrator | 2025-09-23 07:34:57 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:34:57.186514 | orchestrator | 2025-09-23 07:34:57 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:34:57.186545 | orchestrator | 2025-09-23 07:34:57 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:00.218747 | orchestrator | 2025-09-23 07:35:00 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:00.246238 | orchestrator | 2025-09-23 07:35:00 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state 
STARTED 2025-09-23 07:35:00.246300 | orchestrator | 2025-09-23 07:35:00 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:00.246313 | orchestrator | 2025-09-23 07:35:00 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:00.246324 | orchestrator | 2025-09-23 07:35:00 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:00.246335 | orchestrator | 2025-09-23 07:35:00 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:00.246345 | orchestrator | 2025-09-23 07:35:00 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:00.246356 | orchestrator | 2025-09-23 07:35:00 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:03.373185 | orchestrator | 2025-09-23 07:35:03 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:03.373297 | orchestrator | 2025-09-23 07:35:03 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:35:03.373312 | orchestrator | 2025-09-23 07:35:03 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:03.373323 | orchestrator | 2025-09-23 07:35:03 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:03.373334 | orchestrator | 2025-09-23 07:35:03 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:03.373344 | orchestrator | 2025-09-23 07:35:03 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:03.373355 | orchestrator | 2025-09-23 07:35:03 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:03.373366 | orchestrator | 2025-09-23 07:35:03 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:06.386083 | orchestrator | 2025-09-23 07:35:06 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 
2025-09-23 07:35:06.386187 | orchestrator | 2025-09-23 07:35:06 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:35:06.393582 | orchestrator | 2025-09-23 07:35:06 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:06.393627 | orchestrator | 2025-09-23 07:35:06 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:06.393641 | orchestrator | 2025-09-23 07:35:06 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:06.393759 | orchestrator | 2025-09-23 07:35:06 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:06.398299 | orchestrator | 2025-09-23 07:35:06 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:06.398431 | orchestrator | 2025-09-23 07:35:06 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:09.463511 | orchestrator | 2025-09-23 07:35:09 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:09.463571 | orchestrator | 2025-09-23 07:35:09 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:35:09.463579 | orchestrator | 2025-09-23 07:35:09 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:09.463585 | orchestrator | 2025-09-23 07:35:09 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:09.463592 | orchestrator | 2025-09-23 07:35:09 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:09.463598 | orchestrator | 2025-09-23 07:35:09 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:09.463604 | orchestrator | 2025-09-23 07:35:09 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:09.463610 | orchestrator | 2025-09-23 07:35:09 | INFO  | Wait 1 second(s) until the next check 
2025-09-23 07:35:12.585268 | orchestrator | 2025-09-23 07:35:12 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:12.585353 | orchestrator | 2025-09-23 07:35:12 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:35:12.585364 | orchestrator | 2025-09-23 07:35:12 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:12.585378 | orchestrator | 2025-09-23 07:35:12 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:12.585392 | orchestrator | 2025-09-23 07:35:12 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:12.585445 | orchestrator | 2025-09-23 07:35:12 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:12.585466 | orchestrator | 2025-09-23 07:35:12 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:12.585482 | orchestrator | 2025-09-23 07:35:12 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:15.624677 | orchestrator | 2025-09-23 07:35:15 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:15.624757 | orchestrator | 2025-09-23 07:35:15 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state STARTED 2025-09-23 07:35:15.624771 | orchestrator | 2025-09-23 07:35:15 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:15.624789 | orchestrator | 2025-09-23 07:35:15 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:15.624807 | orchestrator | 2025-09-23 07:35:15 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:15.624825 | orchestrator | 2025-09-23 07:35:15 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:15.624842 | orchestrator | 2025-09-23 07:35:15 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is 
in state STARTED 2025-09-23 07:35:15.624860 | orchestrator | 2025-09-23 07:35:15 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:18.664033 | orchestrator | 2025-09-23 07:35:18 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:18.664158 | orchestrator | 2025-09-23 07:35:18 | INFO  | Task c6e9b9f7-5ac7-499a-9383-917664ee7a07 is in state SUCCESS 2025-09-23 07:35:18.665932 | orchestrator | 2025-09-23 07:35:18 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:18.668626 | orchestrator | 2025-09-23 07:35:18 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:18.670664 | orchestrator | 2025-09-23 07:35:18 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:18.672500 | orchestrator | 2025-09-23 07:35:18 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:18.674202 | orchestrator | 2025-09-23 07:35:18 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:18.674236 | orchestrator | 2025-09-23 07:35:18 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:21.718762 | orchestrator | 2025-09-23 07:35:21 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:21.718836 | orchestrator | 2025-09-23 07:35:21 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:21.724975 | orchestrator | 2025-09-23 07:35:21 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:21.725065 | orchestrator | 2025-09-23 07:35:21 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:21.725129 | orchestrator | 2025-09-23 07:35:21 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:21.726623 | orchestrator | 2025-09-23 07:35:21 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in 
state STARTED 2025-09-23 07:35:21.726685 | orchestrator | 2025-09-23 07:35:21 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:24.848216 | orchestrator | 2025-09-23 07:35:24 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:24.848296 | orchestrator | 2025-09-23 07:35:24 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:24.848334 | orchestrator | 2025-09-23 07:35:24 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:24.848346 | orchestrator | 2025-09-23 07:35:24 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:24.848357 | orchestrator | 2025-09-23 07:35:24 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state STARTED 2025-09-23 07:35:24.848368 | orchestrator | 2025-09-23 07:35:24 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:24.848392 | orchestrator | 2025-09-23 07:35:24 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:27.955034 | orchestrator | 2025-09-23 07:35:27 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:27.955157 | orchestrator | 2025-09-23 07:35:27 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:27.955176 | orchestrator | 2025-09-23 07:35:27 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:27.955721 | orchestrator | 2025-09-23 07:35:27 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:27.956012 | orchestrator | 2025-09-23 07:35:27 | INFO  | Task 772b9ee1-7a50-4027-8553-ffe00bc1b90f is in state SUCCESS 2025-09-23 07:35:27.958158 | orchestrator | 2025-09-23 07:35:27 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:27.958192 | orchestrator | 2025-09-23 07:35:27 | INFO  | Wait 1 second(s) until the next check 2025-09-23 
07:35:31.018748 | orchestrator | 2025-09-23 07:35:31 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:31.023840 | orchestrator | 2025-09-23 07:35:31 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:31.027220 | orchestrator | 2025-09-23 07:35:31 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:31.031633 | orchestrator | 2025-09-23 07:35:31 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:31.041362 | orchestrator | 2025-09-23 07:35:31 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:31.041431 | orchestrator | 2025-09-23 07:35:31 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:34.291861 | orchestrator | 2025-09-23 07:35:34 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:34.291944 | orchestrator | 2025-09-23 07:35:34 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:34.291957 | orchestrator | 2025-09-23 07:35:34 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:34.291968 | orchestrator | 2025-09-23 07:35:34 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:34.291979 | orchestrator | 2025-09-23 07:35:34 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:34.291990 | orchestrator | 2025-09-23 07:35:34 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:37.200479 | orchestrator | 2025-09-23 07:35:37 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:37.202471 | orchestrator | 2025-09-23 07:35:37 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED 2025-09-23 07:35:37.203520 | orchestrator | 2025-09-23 07:35:37 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 
07:35:37.203759 | orchestrator | 2025-09-23 07:35:37 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED
2025-09-23 07:35:37.205699 | orchestrator | 2025-09-23 07:35:37 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED
2025-09-23 07:35:37.205763 | orchestrator | 2025-09-23 07:35:37 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:35:40.247534 | orchestrator | 2025-09-23 07:35:40 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:35:40.247617 | orchestrator | 2025-09-23 07:35:40 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state STARTED
2025-09-23 07:35:40.248312 | orchestrator | 2025-09-23 07:35:40 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED
2025-09-23 07:35:40.251609 | orchestrator | 2025-09-23 07:35:40 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED
2025-09-23 07:35:40.253292 | orchestrator | 2025-09-23 07:35:40 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED
2025-09-23 07:35:40.253333 | orchestrator | 2025-09-23 07:35:40 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:35:43.301226 | orchestrator | 2025-09-23 07:35:43 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:35:43.301290 | orchestrator | 2025-09-23 07:35:43 | INFO  | Task be22a225-1eee-4fa9-af94-903d6d87cd98 is in state SUCCESS
2025-09-23 07:35:43.303297 | orchestrator |
2025-09-23 07:35:43.303377 | orchestrator |
2025-09-23 07:35:43.303387 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-09-23 07:35:43.303397 | orchestrator |
2025-09-23 07:35:43.303411 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-09-23 07:35:43.303420 | orchestrator | Tuesday 23 September 2025 07:34:38 +0000 (0:00:00.445) 0:00:00.445 *****
2025-09-23 07:35:43.303428 | orchestrator | ok: [testbed-manager] => {
2025-09-23 07:35:43.303438 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-09-23 07:35:43.303447 | orchestrator | }
2025-09-23 07:35:43.303455 | orchestrator |
2025-09-23 07:35:43.303463 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-09-23 07:35:43.303471 | orchestrator | Tuesday 23 September 2025 07:34:38 +0000 (0:00:00.196) 0:00:00.641 *****
2025-09-23 07:35:43.303479 | orchestrator | ok: [testbed-manager]
2025-09-23 07:35:43.303487 | orchestrator |
2025-09-23 07:35:43.303495 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-09-23 07:35:43.303503 | orchestrator | Tuesday 23 September 2025 07:34:40 +0000 (0:00:01.877) 0:00:02.518 *****
2025-09-23 07:35:43.303511 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-09-23 07:35:43.303519 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-09-23 07:35:43.303527 | orchestrator |
2025-09-23 07:35:43.303535 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-09-23 07:35:43.303542 | orchestrator | Tuesday 23 September 2025 07:34:43 +0000 (0:00:03.089) 0:00:05.607 *****
2025-09-23 07:35:43.303563 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.303571 | orchestrator |
2025-09-23 07:35:43.303580 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-09-23 07:35:43.303588 | orchestrator | Tuesday 23 September 2025 07:34:46 +0000 (0:00:03.043) 0:00:08.651 *****
2025-09-23 07:35:43.303596 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.303604 | orchestrator |
2025-09-23 07:35:43.303612 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-09-23 07:35:43.303620 | orchestrator | Tuesday 23 September 2025 07:34:48 +0000 (0:00:01.385) 0:00:10.037 *****
2025-09-23 07:35:43.303628 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-23 07:35:43.303636 | orchestrator | ok: [testbed-manager]
2025-09-23 07:35:43.303644 | orchestrator |
2025-09-23 07:35:43.303652 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-23 07:35:43.303673 | orchestrator | Tuesday 23 September 2025 07:35:13 +0000 (0:00:25.517) 0:00:35.555 *****
2025-09-23 07:35:43.303682 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.303690 | orchestrator |
2025-09-23 07:35:43.303698 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:35:43.303706 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:35:43.303714 | orchestrator |
2025-09-23 07:35:43.303722 | orchestrator |
2025-09-23 07:35:43.303730 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:35:43.303738 | orchestrator | Tuesday 23 September 2025 07:35:16 +0000 (0:00:02.816) 0:00:38.372 *****
2025-09-23 07:35:43.303746 | orchestrator | ===============================================================================
2025-09-23 07:35:43.303754 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.52s
2025-09-23 07:35:43.303762 | orchestrator | osism.services.homer : Create required directories ---------------------- 3.09s
2025-09-23 07:35:43.303770 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.04s
2025-09-23 07:35:43.303777 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.82s
2025-09-23 07:35:43.303785 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.88s
2025-09-23 07:35:43.303793 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.39s
2025-09-23 07:35:43.303801 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.20s
2025-09-23 07:35:43.303809 | orchestrator |
2025-09-23 07:35:43.303817 | orchestrator |
2025-09-23 07:35:43.303825 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-23 07:35:43.303833 | orchestrator |
2025-09-23 07:35:43.303841 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-23 07:35:43.303849 | orchestrator | Tuesday 23 September 2025 07:34:38 +0000 (0:00:00.707) 0:00:00.707 *****
2025-09-23 07:35:43.303857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-23 07:35:43.303866 | orchestrator |
2025-09-23 07:35:43.303874 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-23 07:35:43.303882 | orchestrator | Tuesday 23 September 2025 07:34:39 +0000 (0:00:00.414) 0:00:01.122 *****
2025-09-23 07:35:43.303890 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-23 07:35:43.303898 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-23 07:35:43.303906 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-23 07:35:43.303914 | orchestrator |
2025-09-23 07:35:43.303922 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-23 07:35:43.303930 | orchestrator | Tuesday 23 September 2025 07:34:41 +0000 (0:00:02.065) 0:00:03.187 *****
2025-09-23 07:35:43.303938 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.303946 | orchestrator |
2025-09-23 07:35:43.303954 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-23 07:35:43.303963 | orchestrator | Tuesday 23 September 2025 07:34:43 +0000 (0:00:02.072) 0:00:05.260 *****
2025-09-23 07:35:43.303981 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-23 07:35:43.303989 | orchestrator | ok: [testbed-manager]
2025-09-23 07:35:43.303999 | orchestrator |
2025-09-23 07:35:43.304008 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-23 07:35:43.304020 | orchestrator | Tuesday 23 September 2025 07:35:16 +0000 (0:00:33.283) 0:00:38.543 *****
2025-09-23 07:35:43.304029 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.304038 | orchestrator |
2025-09-23 07:35:43.304048 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-23 07:35:43.304057 | orchestrator | Tuesday 23 September 2025 07:35:18 +0000 (0:00:01.846) 0:00:40.389 *****
2025-09-23 07:35:43.304071 | orchestrator | ok: [testbed-manager]
2025-09-23 07:35:43.304080 | orchestrator |
2025-09-23 07:35:43.304105 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-23 07:35:43.304113 | orchestrator | Tuesday 23 September 2025 07:35:19 +0000 (0:00:00.470) 0:00:40.860 *****
2025-09-23 07:35:43.304120 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.304128 | orchestrator |
2025-09-23 07:35:43.304136 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-23 07:35:43.304143 | orchestrator | Tuesday 23 September 2025 07:35:21 +0000 (0:00:02.066) 0:00:42.926 *****
2025-09-23 07:35:43.304151 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.304159 | orchestrator |
2025-09-23 07:35:43.304167 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-23 07:35:43.304175 | orchestrator | Tuesday 23 September 2025 07:35:22 +0000 (0:00:01.312) 0:00:44.239 *****
2025-09-23 07:35:43.304182 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.304190 | orchestrator |
2025-09-23 07:35:43.304198 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-23 07:35:43.304206 | orchestrator | Tuesday 23 September 2025 07:35:23 +0000 (0:00:01.184) 0:00:45.423 *****
2025-09-23 07:35:43.304213 | orchestrator | ok: [testbed-manager]
2025-09-23 07:35:43.304221 | orchestrator |
2025-09-23 07:35:43.304229 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:35:43.304237 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:35:43.304244 | orchestrator |
2025-09-23 07:35:43.304252 | orchestrator |
2025-09-23 07:35:43.304260 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:35:43.304267 | orchestrator | Tuesday 23 September 2025 07:35:25 +0000 (0:00:01.532) 0:00:46.956 *****
2025-09-23 07:35:43.304275 | orchestrator | ===============================================================================
2025-09-23 07:35:43.304283 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.28s
2025-09-23 07:35:43.304290 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.07s
2025-09-23 07:35:43.304298 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.07s
2025-09-23 07:35:43.304306 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.07s
2025-09-23 07:35:43.304314 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.85s
2025-09-23 07:35:43.304321 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.53s
2025-09-23 07:35:43.304329 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.31s
2025-09-23 07:35:43.304337 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.18s
2025-09-23 07:35:43.304344 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.47s
2025-09-23 07:35:43.304352 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.41s
2025-09-23 07:35:43.304360 | orchestrator |
2025-09-23 07:35:43.304367 | orchestrator |
2025-09-23 07:35:43.304375 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:35:43.304383 | orchestrator |
2025-09-23 07:35:43.304390 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:35:43.304398 | orchestrator | Tuesday 23 September 2025 07:34:39 +0000 (0:00:00.347) 0:00:00.347 *****
2025-09-23 07:35:43.304405 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-23 07:35:43.304413 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-23 07:35:43.304421 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-23 07:35:43.304428 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-23 07:35:43.304436 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-23 07:35:43.304448 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-23 07:35:43.304456 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-23 07:35:43.304463 | orchestrator |
2025-09-23 07:35:43.304471 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-23 07:35:43.304479 | orchestrator |
2025-09-23 07:35:43.304486 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-23 07:35:43.304494 | orchestrator | Tuesday 23 September 2025 07:34:40 +0000 (0:00:00.991) 0:00:01.338 *****
2025-09-23 07:35:43.304512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:35:43.304525 | orchestrator |
2025-09-23 07:35:43.304533 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-23 07:35:43.304541 | orchestrator | Tuesday 23 September 2025 07:34:41 +0000 (0:00:01.602) 0:00:02.941 *****
2025-09-23 07:35:43.304549 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:35:43.304556 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:35:43.304564 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:35:43.304572 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:35:43.304580 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:35:43.304592 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:35:43.304600 | orchestrator | ok: [testbed-manager]
2025-09-23 07:35:43.304608 | orchestrator |
2025-09-23 07:35:43.304616 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-23 07:35:43.304627 | orchestrator | Tuesday 23 September 2025 07:34:43 +0000 (0:00:03.503) 0:00:04.383 *****
2025-09-23 07:35:43.304635 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:35:43.304643 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:35:43.304650 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:35:43.304658 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:35:43.304665 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:35:43.304673 | orchestrator | ok: [testbed-manager]
2025-09-23 07:35:43.304681 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:35:43.304688 | orchestrator |
2025-09-23 07:35:43.304696 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-23 07:35:43.304704 | orchestrator | Tuesday 23 September 2025 07:34:46 +0000 (0:00:03.503) 0:00:07.886 *****
2025-09-23 07:35:43.304711 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:35:43.304719 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:35:43.304727 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:35:43.304735 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.304742 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:35:43.304750 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:35:43.304758 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:35:43.304765 | orchestrator |
2025-09-23 07:35:43.304773 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-23 07:35:43.304781 | orchestrator | Tuesday 23 September 2025 07:34:48 +0000 (0:00:01.902) 0:00:09.789 *****
2025-09-23 07:35:43.304789 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:35:43.304796 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:35:43.304804 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:35:43.304812 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:35:43.304819 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:35:43.304827 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:35:43.304835 | orchestrator | changed: [testbed-manager]
2025-09-23 07:35:43.304843 | orchestrator |
2025-09-23 07:35:43.304850 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-23 07:35:43.304858 | orchestrator | Tuesday 23 September 2025 07:34:59 +0000 (0:00:10.516) 0:00:20.306 *****
2025-09-23 07:35:43.304866 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:35:43.304873 | orchestrator | changed: [testbed-node-5]
2025-09-23
07:35:43.304881 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:35:43.304893 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:35:43.304901 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:35:43.304908 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:35:43.304916 | orchestrator | changed: [testbed-manager] 2025-09-23 07:35:43.304924 | orchestrator | 2025-09-23 07:35:43.304931 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-23 07:35:43.304939 | orchestrator | Tuesday 23 September 2025 07:35:20 +0000 (0:00:21.188) 0:00:41.494 ***** 2025-09-23 07:35:43.304947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:35:43.304956 | orchestrator | 2025-09-23 07:35:43.304964 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-23 07:35:43.304972 | orchestrator | Tuesday 23 September 2025 07:35:21 +0000 (0:00:01.378) 0:00:42.873 ***** 2025-09-23 07:35:43.304979 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-23 07:35:43.304987 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-23 07:35:43.304995 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-23 07:35:43.305003 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-23 07:35:43.305011 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-23 07:35:43.305018 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-23 07:35:43.305026 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-23 07:35:43.305034 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-23 07:35:43.305042 | orchestrator | changed: [testbed-node-0] => 
(item=stream.conf) 2025-09-23 07:35:43.305049 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-23 07:35:43.305057 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-23 07:35:43.305065 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-23 07:35:43.305072 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-23 07:35:43.305080 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-23 07:35:43.305099 | orchestrator | 2025-09-23 07:35:43.305107 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-23 07:35:43.305114 | orchestrator | Tuesday 23 September 2025 07:35:27 +0000 (0:00:05.886) 0:00:48.759 ***** 2025-09-23 07:35:43.305122 | orchestrator | ok: [testbed-manager] 2025-09-23 07:35:43.305130 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:35:43.305138 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:35:43.305145 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:35:43.305153 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:35:43.305161 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:35:43.305168 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:35:43.305176 | orchestrator | 2025-09-23 07:35:43.305184 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-23 07:35:43.305192 | orchestrator | Tuesday 23 September 2025 07:35:28 +0000 (0:00:01.029) 0:00:49.789 ***** 2025-09-23 07:35:43.305199 | orchestrator | changed: [testbed-manager] 2025-09-23 07:35:43.305207 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:35:43.305215 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:35:43.305223 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:35:43.305230 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:35:43.305238 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:35:43.305246 | orchestrator | 
changed: [testbed-node-5] 2025-09-23 07:35:43.305253 | orchestrator | 2025-09-23 07:35:43.305261 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-23 07:35:43.305273 | orchestrator | Tuesday 23 September 2025 07:35:30 +0000 (0:00:01.549) 0:00:51.338 ***** 2025-09-23 07:35:43.305281 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:35:43.305289 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:35:43.305296 | orchestrator | ok: [testbed-manager] 2025-09-23 07:35:43.305308 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:35:43.305319 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:35:43.305327 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:35:43.305335 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:35:43.305343 | orchestrator | 2025-09-23 07:35:43.305350 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-23 07:35:43.305358 | orchestrator | Tuesday 23 September 2025 07:35:32 +0000 (0:00:02.044) 0:00:53.382 ***** 2025-09-23 07:35:43.305366 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:35:43.305374 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:35:43.305381 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:35:43.305389 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:35:43.305396 | orchestrator | ok: [testbed-manager] 2025-09-23 07:35:43.305404 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:35:43.305412 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:35:43.305419 | orchestrator | 2025-09-23 07:35:43.305427 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-23 07:35:43.305435 | orchestrator | Tuesday 23 September 2025 07:35:35 +0000 (0:00:02.774) 0:00:56.157 ***** 2025-09-23 07:35:43.305443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-23 07:35:43.305452 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:35:43.305460 | orchestrator | 2025-09-23 07:35:43.305468 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-23 07:35:43.305475 | orchestrator | Tuesday 23 September 2025 07:35:36 +0000 (0:00:01.391) 0:00:57.548 ***** 2025-09-23 07:35:43.305483 | orchestrator | changed: [testbed-manager] 2025-09-23 07:35:43.305491 | orchestrator | 2025-09-23 07:35:43.305498 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-23 07:35:43.305506 | orchestrator | Tuesday 23 September 2025 07:35:38 +0000 (0:00:01.750) 0:00:59.299 ***** 2025-09-23 07:35:43.305514 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:35:43.305522 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:35:43.305530 | orchestrator | changed: [testbed-manager] 2025-09-23 07:35:43.305537 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:35:43.305545 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:35:43.305552 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:35:43.305560 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:35:43.305568 | orchestrator | 2025-09-23 07:35:43.305575 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:35:43.305583 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:35:43.305591 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:35:43.305599 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:35:43.305607 | orchestrator | testbed-node-2 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:35:43.305615 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:35:43.305623 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:35:43.305630 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:35:43.305638 | orchestrator | 2025-09-23 07:35:43.305650 | orchestrator | 2025-09-23 07:35:43.305657 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:35:43.305676 | orchestrator | Tuesday 23 September 2025 07:35:41 +0000 (0:00:03.319) 0:01:02.618 ***** 2025-09-23 07:35:43.305685 | orchestrator | =============================================================================== 2025-09-23 07:35:43.305700 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 21.19s 2025-09-23 07:35:43.305708 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.52s 2025-09-23 07:35:43.305716 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.89s 2025-09-23 07:35:43.305724 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.50s 2025-09-23 07:35:43.305732 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.32s 2025-09-23 07:35:43.305739 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.77s 2025-09-23 07:35:43.305747 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.04s 2025-09-23 07:35:43.305755 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.90s 2025-09-23 07:35:43.305762 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count 
parameter ---------- 1.75s 2025-09-23 07:35:43.305770 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.60s 2025-09-23 07:35:43.305778 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.55s 2025-09-23 07:35:43.305790 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.44s 2025-09-23 07:35:43.305798 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.39s 2025-09-23 07:35:43.305806 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.38s 2025-09-23 07:35:43.305813 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.03s 2025-09-23 07:35:43.305821 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.99s 2025-09-23 07:35:43.305829 | orchestrator | 2025-09-23 07:35:43 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:43.305837 | orchestrator | 2025-09-23 07:35:43 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:43.305845 | orchestrator | 2025-09-23 07:35:43 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:43.305852 | orchestrator | 2025-09-23 07:35:43 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:46.331395 | orchestrator | 2025-09-23 07:35:46 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:46.331455 | orchestrator | 2025-09-23 07:35:46 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:46.331759 | orchestrator | 2025-09-23 07:35:46 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:46.332778 | orchestrator | 2025-09-23 07:35:46 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 
07:35:46.332861 | orchestrator | 2025-09-23 07:35:46 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:49.358557 | orchestrator | 2025-09-23 07:35:49 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:49.358900 | orchestrator | 2025-09-23 07:35:49 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:49.359931 | orchestrator | 2025-09-23 07:35:49 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:49.360751 | orchestrator | 2025-09-23 07:35:49 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:49.361039 | orchestrator | 2025-09-23 07:35:49 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:52.392358 | orchestrator | 2025-09-23 07:35:52 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:52.392863 | orchestrator | 2025-09-23 07:35:52 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:52.393770 | orchestrator | 2025-09-23 07:35:52 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:52.394815 | orchestrator | 2025-09-23 07:35:52 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:52.395949 | orchestrator | 2025-09-23 07:35:52 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:55.432667 | orchestrator | 2025-09-23 07:35:55 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:55.434004 | orchestrator | 2025-09-23 07:35:55 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:55.435765 | orchestrator | 2025-09-23 07:35:55 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:55.435828 | orchestrator | 2025-09-23 07:35:55 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:55.435843 | orchestrator 
| 2025-09-23 07:35:55 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:35:58.464908 | orchestrator | 2025-09-23 07:35:58 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:35:58.465861 | orchestrator | 2025-09-23 07:35:58 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state STARTED 2025-09-23 07:35:58.468011 | orchestrator | 2025-09-23 07:35:58 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:35:58.470743 | orchestrator | 2025-09-23 07:35:58 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:35:58.470792 | orchestrator | 2025-09-23 07:35:58 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:01.509302 | orchestrator | 2025-09-23 07:36:01 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:01.510686 | orchestrator | 2025-09-23 07:36:01 | INFO  | Task 9fec17cc-e51a-429f-a81e-b8db5e4eb6e3 is in state SUCCESS 2025-09-23 07:36:01.510727 | orchestrator | 2025-09-23 07:36:01 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:01.512552 | orchestrator | 2025-09-23 07:36:01 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:01.512653 | orchestrator | 2025-09-23 07:36:01 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:04.557837 | orchestrator | 2025-09-23 07:36:04 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:04.560552 | orchestrator | 2025-09-23 07:36:04 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:04.561533 | orchestrator | 2025-09-23 07:36:04 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:04.561771 | orchestrator | 2025-09-23 07:36:04 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:07.601115 | orchestrator | 2025-09-23 07:36:07 | INFO  | Task 
f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:07.602799 | orchestrator | 2025-09-23 07:36:07 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:07.604815 | orchestrator | 2025-09-23 07:36:07 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:07.605950 | orchestrator | 2025-09-23 07:36:07 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:10.639258 | orchestrator | 2025-09-23 07:36:10 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:10.639790 | orchestrator | 2025-09-23 07:36:10 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:10.642503 | orchestrator | 2025-09-23 07:36:10 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:10.642530 | orchestrator | 2025-09-23 07:36:10 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:13.683924 | orchestrator | 2025-09-23 07:36:13 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:13.684648 | orchestrator | 2025-09-23 07:36:13 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:13.686595 | orchestrator | 2025-09-23 07:36:13 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:13.686657 | orchestrator | 2025-09-23 07:36:13 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:16.715320 | orchestrator | 2025-09-23 07:36:16 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:16.715790 | orchestrator | 2025-09-23 07:36:16 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:16.716963 | orchestrator | 2025-09-23 07:36:16 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:16.716994 | orchestrator | 2025-09-23 07:36:16 | INFO  | Wait 1 second(s) until the next 
check 2025-09-23 07:36:19.748020 | orchestrator | 2025-09-23 07:36:19 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:19.751515 | orchestrator | 2025-09-23 07:36:19 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:19.754129 | orchestrator | 2025-09-23 07:36:19 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:19.754215 | orchestrator | 2025-09-23 07:36:19 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:22.785435 | orchestrator | 2025-09-23 07:36:22 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:22.786334 | orchestrator | 2025-09-23 07:36:22 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:22.788782 | orchestrator | 2025-09-23 07:36:22 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:22.788826 | orchestrator | 2025-09-23 07:36:22 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:25.832453 | orchestrator | 2025-09-23 07:36:25 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:25.834481 | orchestrator | 2025-09-23 07:36:25 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:25.836770 | orchestrator | 2025-09-23 07:36:25 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:25.836864 | orchestrator | 2025-09-23 07:36:25 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:28.881652 | orchestrator | 2025-09-23 07:36:28 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:28.882491 | orchestrator | 2025-09-23 07:36:28 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:28.887200 | orchestrator | 2025-09-23 07:36:28 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 
07:36:28.887277 | orchestrator | 2025-09-23 07:36:28 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:31.923371 | orchestrator | 2025-09-23 07:36:31 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:31.925233 | orchestrator | 2025-09-23 07:36:31 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:31.925273 | orchestrator | 2025-09-23 07:36:31 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:31.925286 | orchestrator | 2025-09-23 07:36:31 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:34.971242 | orchestrator | 2025-09-23 07:36:34 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:34.971346 | orchestrator | 2025-09-23 07:36:34 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:34.971771 | orchestrator | 2025-09-23 07:36:34 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:34.971989 | orchestrator | 2025-09-23 07:36:34 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:38.012998 | orchestrator | 2025-09-23 07:36:38 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:38.016505 | orchestrator | 2025-09-23 07:36:38 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:38.017970 | orchestrator | 2025-09-23 07:36:38 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:38.018468 | orchestrator | 2025-09-23 07:36:38 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:41.052342 | orchestrator | 2025-09-23 07:36:41 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:41.053719 | orchestrator | 2025-09-23 07:36:41 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:41.055445 | orchestrator | 2025-09-23 07:36:41 | 
INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:41.055474 | orchestrator | 2025-09-23 07:36:41 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:44.080221 | orchestrator | 2025-09-23 07:36:44 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:44.080826 | orchestrator | 2025-09-23 07:36:44 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:44.081634 | orchestrator | 2025-09-23 07:36:44 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:44.081653 | orchestrator | 2025-09-23 07:36:44 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:47.129618 | orchestrator | 2025-09-23 07:36:47 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:47.132023 | orchestrator | 2025-09-23 07:36:47 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:47.134686 | orchestrator | 2025-09-23 07:36:47 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state STARTED 2025-09-23 07:36:47.134992 | orchestrator | 2025-09-23 07:36:47 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:50.179373 | orchestrator | 2025-09-23 07:36:50 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:50.180362 | orchestrator | 2025-09-23 07:36:50 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:36:50.180862 | orchestrator | 2025-09-23 07:36:50 | INFO  | Task c4b971d4-8cc3-4cf6-bd1a-62c8a8d9947b is in state STARTED 2025-09-23 07:36:50.181840 | orchestrator | 2025-09-23 07:36:50 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED 2025-09-23 07:36:50.182403 | orchestrator | 2025-09-23 07:36:50 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:50.183781 | orchestrator | 2025-09-23 07:36:50 | INFO  | Task 
77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:36:50.190152 | orchestrator | 2025-09-23 07:36:50 | INFO  | Task 572009ee-259b-4c53-863e-0abaf07b69de is in state SUCCESS 2025-09-23 07:36:50.191783 | orchestrator | 2025-09-23 07:36:50.191811 | orchestrator | 2025-09-23 07:36:50.191820 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-23 07:36:50.191828 | orchestrator | 2025-09-23 07:36:50.191835 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-23 07:36:50.191842 | orchestrator | Tuesday 23 September 2025 07:34:55 +0000 (0:00:00.203) 0:00:00.203 ***** 2025-09-23 07:36:50.191850 | orchestrator | ok: [testbed-manager] 2025-09-23 07:36:50.191857 | orchestrator | 2025-09-23 07:36:50.191864 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-23 07:36:50.191870 | orchestrator | Tuesday 23 September 2025 07:34:57 +0000 (0:00:01.456) 0:00:01.659 ***** 2025-09-23 07:36:50.191890 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-23 07:36:50.191898 | orchestrator | 2025-09-23 07:36:50.191904 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-23 07:36:50.191934 | orchestrator | Tuesday 23 September 2025 07:34:57 +0000 (0:00:00.550) 0:00:02.210 ***** 2025-09-23 07:36:50.191942 | orchestrator | changed: [testbed-manager] 2025-09-23 07:36:50.191949 | orchestrator | 2025-09-23 07:36:50.191956 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-23 07:36:50.191962 | orchestrator | Tuesday 23 September 2025 07:34:59 +0000 (0:00:01.393) 0:00:03.603 ***** 2025-09-23 07:36:50.191968 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
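The `FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).` line above is Ansible's standard output for a task with an `until`/`retries`/`delay` loop: the first attempt failed (the container was not yet healthy), and the task was re-run until the condition passed. A minimal sketch of the same retry semantics, assuming one initial attempt plus up to `retries` retries (the `retry_until`/`probe` names are illustrative, not taken from the role):

```python
import time

def retry_until(check, retries=10, delay=5, sleep=time.sleep):
    # One initial attempt plus up to `retries` retries, sleeping `delay`
    # seconds between attempts -- mirroring the until/retries/delay loop
    # Ansible reports as "FAILED - RETRYING ... (N retries left)".
    for attempt in range(retries + 1):
        result = check()
        if result:
            return result
        if attempt < retries:
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            sleep(delay)
    raise RuntimeError(f"still failing after {retries} retries")

# A probe that only succeeds on its second attempt:
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 2

retry_until(probe, retries=10, delay=0, sleep=lambda s: None)
```

This matches the recap above: the task spent most of its 54.87s inside the retry loop waiting for the service to come up before reporting `ok`.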
2025-09-23 07:36:50.191975 | orchestrator | ok: [testbed-manager] 2025-09-23 07:36:50.191982 | orchestrator | 2025-09-23 07:36:50.191988 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-23 07:36:50.192002 | orchestrator | Tuesday 23 September 2025 07:35:54 +0000 (0:00:54.870) 0:00:58.474 ***** 2025-09-23 07:36:50.192008 | orchestrator | changed: [testbed-manager] 2025-09-23 07:36:50.192014 | orchestrator | 2025-09-23 07:36:50.192021 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:36:50.192033 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:36:50.192087 | orchestrator | 2025-09-23 07:36:50.192095 | orchestrator | 2025-09-23 07:36:50.192102 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:36:50.192108 | orchestrator | Tuesday 23 September 2025 07:35:59 +0000 (0:00:05.601) 0:01:04.076 ***** 2025-09-23 07:36:50.192138 | orchestrator | =============================================================================== 2025-09-23 07:36:50.192144 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 54.87s 2025-09-23 07:36:50.192150 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.60s 2025-09-23 07:36:50.192156 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.46s 2025-09-23 07:36:50.192178 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.39s 2025-09-23 07:36:50.192186 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.55s 2025-09-23 07:36:50.192193 | orchestrator | 2025-09-23 07:36:50.192199 | orchestrator | 2025-09-23 07:36:50.192207 | orchestrator | PLAY [Apply role common] 
******************************************************* 2025-09-23 07:36:50.192214 | orchestrator | 2025-09-23 07:36:50.192221 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-23 07:36:50.192228 | orchestrator | Tuesday 23 September 2025 07:34:31 +0000 (0:00:00.391) 0:00:00.391 ***** 2025-09-23 07:36:50.192235 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:36:50.192275 | orchestrator | 2025-09-23 07:36:50.192303 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-23 07:36:50.192337 | orchestrator | Tuesday 23 September 2025 07:34:33 +0000 (0:00:01.488) 0:00:01.879 ***** 2025-09-23 07:36:50.192345 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-23 07:36:50.192352 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-23 07:36:50.192358 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-23 07:36:50.192372 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-23 07:36:50.192378 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-23 07:36:50.192385 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-23 07:36:50.192392 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-23 07:36:50.192406 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-23 07:36:50.192414 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-23 07:36:50.192421 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-23 07:36:50.192445 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-23 07:36:50.192453 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-23 07:36:50.192461 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-23 07:36:50.192475 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-23 07:36:50.192484 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-23 07:36:50.192492 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-23 07:36:50.192508 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-23 07:36:50.192515 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-23 07:36:50.192523 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-23 07:36:50.192529 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-23 07:36:50.192537 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-23 07:36:50.192545 | orchestrator | 2025-09-23 07:36:50.192557 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-23 07:36:50.192565 | orchestrator | Tuesday 23 September 2025 07:34:37 +0000 (0:00:04.194) 0:00:06.074 ***** 2025-09-23 07:36:50.192573 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:36:50.192581 | orchestrator | 2025-09-23 
07:36:50.192589 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-23 07:36:50.192597 | orchestrator | Tuesday 23 September 2025 07:34:38 +0000 (0:00:01.354) 0:00:07.428 ***** 2025-09-23 07:36:50.192607 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.192617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.192631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.192645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.192653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.192666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.192700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.192720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192743 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192847 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.192854 | orchestrator | 2025-09-23 07:36:50.192862 | orchestrator | TASK [service-cert-copy : common | Copying 
over backend internal TLS certificate] *** 2025-09-23 07:36:50.192869 | orchestrator | Tuesday 23 September 2025 07:34:45 +0000 (0:00:06.437) 0:00:13.865 ***** 2025-09-23 07:36:50.192880 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.192892 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.192899 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.192910 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:36:50.192917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.192925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.192932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.192938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.192945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.192959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.192965 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:36:50.192974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.192985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.192992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.192998 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:36:50.193005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193023 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:36:50.193030 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:36:50.193036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-23 07:36:50.193107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193115 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:36:50.193122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193144 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:36:50.193151 | orchestrator | 2025-09-23 07:36:50.193157 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-23 07:36:50.193165 | orchestrator | Tuesday 23 September 2025 07:34:47 +0000 (0:00:01.753) 0:00:15.619 ***** 2025-09-23 07:36:50.193172 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193179 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193197 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193209 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:36:50.193218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193245 | 
orchestrator | skipping: [testbed-node-0] 2025-09-23 07:36:50.193252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193279 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:36:50.193292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193327 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:36:50.193334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193360 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:36:50.193367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193395 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:36:50.193411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-23 07:36:50.193418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.193439 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:36:50.193445 | orchestrator | 2025-09-23 07:36:50.193451 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-23 07:36:50.193458 | orchestrator | Tuesday 23 September 2025 07:34:50 +0000 (0:00:03.546) 0:00:19.165 ***** 2025-09-23 07:36:50.193464 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:36:50.193471 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:36:50.193477 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:36:50.193483 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:36:50.193490 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:36:50.193496 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:36:50.193502 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:36:50.193508 | orchestrator | 2025-09-23 07:36:50.193514 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-23 07:36:50.193521 | orchestrator | Tuesday 23 September 2025 07:34:53 +0000 (0:00:02.533) 0:00:21.699 ***** 2025-09-23 07:36:50.193528 | orchestrator | skipping: [testbed-manager] 2025-09-23 07:36:50.193535 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:36:50.193542 | 
orchestrator | skipping: [testbed-node-1] 2025-09-23 07:36:50.193549 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:36:50.193556 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:36:50.193563 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:36:50.193570 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:36:50.193577 | orchestrator | 2025-09-23 07:36:50.193584 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-23 07:36:50.193597 | orchestrator | Tuesday 23 September 2025 07:34:54 +0000 (0:00:01.378) 0:00:23.077 ***** 2025-09-23 07:36:50.193609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.193616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.193630 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.193643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.193650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.193657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-23 07:36:50.193664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.193671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.193681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193689 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193760 | orchestrator | changed: [testbed-manager] 
=> (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193803 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.193810 | orchestrator | 2025-09-23 07:36:50.193816 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-23 07:36:50.193823 | orchestrator | Tuesday 23 September 2025 07:34:59 +0000 (0:00:04.846) 0:00:27.924 ***** 2025-09-23 07:36:50.193830 | orchestrator | [WARNING]: Skipped 2025-09-23 07:36:50.193837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-23 07:36:50.193843 | orchestrator | to this access issue: 2025-09-23 07:36:50.193850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-23 07:36:50.193857 | orchestrator | directory 2025-09-23 07:36:50.193864 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-23 07:36:50.193871 | orchestrator | 2025-09-23 07:36:50.193878 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-23 07:36:50.193889 | orchestrator | Tuesday 23 September 2025 07:35:00 +0000 (0:00:01.056) 0:00:28.980 ***** 2025-09-23 07:36:50.193896 | orchestrator | [WARNING]: Skipped 2025-09-23 07:36:50.193903 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-23 07:36:50.193910 | orchestrator | to this access issue: 2025-09-23 07:36:50.193917 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-23 07:36:50.193924 | orchestrator | directory 2025-09-23 07:36:50.193931 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-23 07:36:50.193938 | orchestrator | 2025-09-23 07:36:50.193945 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-23 
07:36:50.193953 | orchestrator | Tuesday 23 September 2025 07:35:01 +0000 (0:00:00.945) 0:00:29.925 ***** 2025-09-23 07:36:50.193959 | orchestrator | [WARNING]: Skipped 2025-09-23 07:36:50.193966 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-23 07:36:50.193974 | orchestrator | to this access issue: 2025-09-23 07:36:50.193987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-23 07:36:50.193995 | orchestrator | directory 2025-09-23 07:36:50.194002 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-23 07:36:50.194008 | orchestrator | 2025-09-23 07:36:50.194074 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-23 07:36:50.194085 | orchestrator | Tuesday 23 September 2025 07:35:02 +0000 (0:00:00.971) 0:00:30.897 ***** 2025-09-23 07:36:50.194092 | orchestrator | [WARNING]: Skipped 2025-09-23 07:36:50.194099 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-23 07:36:50.194105 | orchestrator | to this access issue: 2025-09-23 07:36:50.194112 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-23 07:36:50.194118 | orchestrator | directory 2025-09-23 07:36:50.194125 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-23 07:36:50.194132 | orchestrator | 2025-09-23 07:36:50.194139 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-23 07:36:50.194145 | orchestrator | Tuesday 23 September 2025 07:35:03 +0000 (0:00:01.164) 0:00:32.062 ***** 2025-09-23 07:36:50.194152 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:36:50.194167 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:36:50.194173 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:36:50.194179 | orchestrator | changed: [testbed-node-4] 2025-09-23 
07:36:50.194186 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:36:50.194192 | orchestrator | changed: [testbed-manager] 2025-09-23 07:36:50.194198 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:36:50.194204 | orchestrator | 2025-09-23 07:36:50.194211 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-23 07:36:50.194217 | orchestrator | Tuesday 23 September 2025 07:35:07 +0000 (0:00:04.182) 0:00:36.244 ***** 2025-09-23 07:36:50.194223 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-23 07:36:50.194231 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-23 07:36:50.194238 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-23 07:36:50.194251 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-23 07:36:50.194265 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-23 07:36:50.194272 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-23 07:36:50.194279 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-23 07:36:50.194286 | orchestrator | 2025-09-23 07:36:50.194302 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-23 07:36:50.194310 | orchestrator | Tuesday 23 September 2025 07:35:10 +0000 (0:00:03.085) 0:00:39.330 ***** 2025-09-23 07:36:50.194323 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:36:50.194330 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:36:50.194343 | orchestrator | changed: [testbed-node-2] 2025-09-23 
07:36:50.194350 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:36:50.194357 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:36:50.194364 | orchestrator | changed: [testbed-manager] 2025-09-23 07:36:50.194371 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:36:50.194378 | orchestrator | 2025-09-23 07:36:50.194385 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-23 07:36:50.194392 | orchestrator | Tuesday 23 September 2025 07:35:13 +0000 (0:00:02.932) 0:00:42.262 ***** 2025-09-23 07:36:50.194400 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.194415 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.194430 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.194456 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194466 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194474 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.194489 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194503 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194511 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.194526 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.194543 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194551 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:36:50.194566 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194573 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194580 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:36:50.194591 | orchestrator |
2025-09-23 07:36:50.194598 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-09-23 07:36:50.194605 | orchestrator | Tuesday 23 September 2025 07:35:16 +0000 (0:00:02.525) 0:00:44.787 *****
2025-09-23 07:36:50.194612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-23 07:36:50.194619 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-23 07:36:50.194627 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-23 07:36:50.194639 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-23 07:36:50.194647 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-23 07:36:50.194660 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-23 07:36:50.194667 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-23 07:36:50.194674 | orchestrator |
2025-09-23 07:36:50.194680 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-09-23 07:36:50.194690 | orchestrator | Tuesday 23 September 2025 07:35:19 +0000 (0:00:02.720) 0:00:47.508 *****
2025-09-23 07:36:50.194697 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-23 07:36:50.194703 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-23 07:36:50.194710 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-23 07:36:50.194716 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-23 07:36:50.194723 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-23 07:36:50.194730 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-23 07:36:50.194736 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-23 07:36:50.194743 | orchestrator |
2025-09-23 07:36:50.194750 | orchestrator | TASK [common : Check common containers] ****************************************
2025-09-23 07:36:50.194756 | orchestrator | Tuesday 23 September 2025 07:35:21 +0000 (0:00:02.150) 0:00:49.659 *****
2025-09-23 07:36:50.194763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-23 07:36:50.194769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-23 07:36:50.194776 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194794 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-23 07:36:50.194817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2025-09-23 07:36:50.194839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194868 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194901 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194908 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:36:50.194944 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:36:50.194951 | orchestrator |
2025-09-23 07:36:50.194962 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-09-23 07:36:50.194969 | orchestrator | Tuesday 23 September 2025 07:35:25 +0000 (0:00:04.438) 0:00:54.098 *****
2025-09-23 07:36:50.194975 | orchestrator | changed: [testbed-manager]
2025-09-23 07:36:50.194982 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:36:50.194989 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:36:50.194995 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:36:50.195002 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:36:50.195009 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:36:50.195014 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:36:50.195020 | orchestrator |
2025-09-23 07:36:50.195027 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-09-23 07:36:50.195036 | orchestrator | Tuesday 23 September 2025 07:35:27 +0000 (0:00:01.922) 0:00:56.020 *****
2025-09-23 07:36:50.195053 | orchestrator | changed: [testbed-manager]
2025-09-23 07:36:50.195060 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:36:50.195066 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:36:50.195073 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:36:50.195080 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:36:50.195087 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:36:50.195094 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:36:50.195100 | orchestrator |
2025-09-23 07:36:50.195106 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-23 07:36:50.195112 | orchestrator | Tuesday 23 September 2025 07:35:28 +0000 (0:00:01.341) 0:00:57.361 *****
2025-09-23 07:36:50.195118 | orchestrator |
2025-09-23 07:36:50.195125 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-23 07:36:50.195131 | orchestrator | Tuesday 23 September 2025 07:35:28 +0000 (0:00:00.062) 0:00:57.424 *****
2025-09-23 07:36:50.195137 | orchestrator |
2025-09-23 07:36:50.195143 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-23 07:36:50.195149 | orchestrator | Tuesday 23 September 2025 07:35:28 +0000 (0:00:00.071) 0:00:57.496 *****
2025-09-23 07:36:50.195164 | orchestrator |
2025-09-23 07:36:50.195170 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-23 07:36:50.195177 | orchestrator | Tuesday 23 September 2025 07:35:29 +0000 (0:00:00.113) 0:00:57.609 *****
2025-09-23 07:36:50.195190 | orchestrator |
2025-09-23 07:36:50.195197 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-23 07:36:50.195203 | orchestrator | Tuesday 23 September 2025 07:35:29 +0000 (0:00:00.254) 0:00:57.864 *****
2025-09-23 07:36:50.195209 | orchestrator |
2025-09-23 07:36:50.195215 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-23 07:36:50.195222 | orchestrator | Tuesday 23 September 2025 07:35:29 +0000 (0:00:00.053) 0:00:57.917 *****
2025-09-23 07:36:50.195229 | orchestrator |
2025-09-23 07:36:50.195235 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-23 07:36:50.195250 | orchestrator | Tuesday 23 September 2025 07:35:29 +0000 (0:00:00.098) 0:00:58.016 *****
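The "Check common containers" loop above iterates over a dict of container definitions (container_name, image, environment, volumes); a changed result notifies the matching "Restart <name> container" handler, which the "Flush handlers" tasks then run. A minimal sketch of that check-then-restart pattern, with hypothetical helper names (this is not kolla-ansible's actual kolla_container module):

```python
# Hedged sketch of the pattern visible in the log, not kolla-ansible's code:
# compare each desired container spec with what is running, and restart
# ("handler") every container whose check reports a change.
from dataclasses import dataclass, field

@dataclass
class ContainerSpec:
    container_name: str
    image: str
    volumes: list = field(default_factory=list)
    environment: dict = field(default_factory=dict)

def check_container(spec: ContainerSpec, running: dict) -> bool:
    """Return True ("changed") when the running container differs from the spec."""
    current = running.get(spec.container_name)
    if current is None:
        return True  # not deployed yet -> changed
    return (current.get("image") != spec.image
            or current.get("volumes") != spec.volumes
            or current.get("environment") != spec.environment)

def reconcile(specs, running, restart):
    """Restart every container whose check reports a change (handler flush)."""
    for spec in specs:
        if check_container(spec, running):
            restart(spec.container_name)
```

On first deployment every check reports changed, which is why all three containers (fluentd, kolla-toolbox, cron) are restarted by handlers in the output below.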
2025-09-23 07:36:50.195257 | orchestrator |
2025-09-23 07:36:50.195263 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-09-23 07:36:50.195270 | orchestrator | Tuesday 23 September 2025 07:35:29 +0000 (0:00:00.085) 0:00:58.102 *****
2025-09-23 07:36:50.195276 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:36:50.195283 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:36:50.195289 | orchestrator | changed: [testbed-manager]
2025-09-23 07:36:50.195295 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:36:50.195301 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:36:50.195308 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:36:50.195314 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:36:50.195320 | orchestrator |
2025-09-23 07:36:50.195326 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-09-23 07:36:50.195333 | orchestrator | Tuesday 23 September 2025 07:36:08 +0000 (0:00:39.023) 0:01:37.126 *****
2025-09-23 07:36:50.195340 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:36:50.195347 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:36:50.195354 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:36:50.195361 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:36:50.195368 | orchestrator | changed: [testbed-manager]
2025-09-23 07:36:50.195375 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:36:50.195381 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:36:50.195388 | orchestrator |
2025-09-23 07:36:50.195396 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-09-23 07:36:50.195409 | orchestrator | Tuesday 23 September 2025 07:36:36 +0000 (0:00:27.882) 0:02:05.008 *****
2025-09-23 07:36:50.195416 | orchestrator | ok: [testbed-manager]
2025-09-23 07:36:50.195423 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:36:50.195430 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:36:50.195444 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:36:50.195457 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:36:50.195464 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:36:50.195471 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:36:50.195478 | orchestrator |
2025-09-23 07:36:50.195485 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-09-23 07:36:50.195492 | orchestrator | Tuesday 23 September 2025 07:36:38 +0000 (0:00:01.838) 0:02:06.847 *****
2025-09-23 07:36:50.195499 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:36:50.195505 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:36:50.195512 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:36:50.195518 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:36:50.195525 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:36:50.195532 | orchestrator | changed: [testbed-manager]
2025-09-23 07:36:50.195539 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:36:50.195546 | orchestrator |
2025-09-23 07:36:50.195552 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:36:50.195560 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-23 07:36:50.195567 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-23 07:36:50.195584 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-23 07:36:50.195598 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-23 07:36:50.195605 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-23 07:36:50.195617 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-23 07:36:50.195624 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-23 07:36:50.195630 | orchestrator |
2025-09-23 07:36:50.195637 | orchestrator |
2025-09-23 07:36:50.195644 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:36:50.195655 | orchestrator | Tuesday 23 September 2025 07:36:47 +0000 (0:00:09.259) 0:02:16.107 *****
2025-09-23 07:36:50.195662 | orchestrator | ===============================================================================
2025-09-23 07:36:50.195668 | orchestrator | common : Restart fluentd container ------------------------------------- 39.02s
2025-09-23 07:36:50.195675 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 27.88s
2025-09-23 07:36:50.195681 | orchestrator | common : Restart cron container ----------------------------------------- 9.26s
2025-09-23 07:36:50.195688 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.44s
2025-09-23 07:36:50.195694 | orchestrator | common : Copying over config.json files for services -------------------- 4.85s
2025-09-23 07:36:50.195701 | orchestrator | common : Check common containers ---------------------------------------- 4.44s
2025-09-23 07:36:50.195708 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.19s
2025-09-23 07:36:50.195715 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.18s
2025-09-23 07:36:50.195722 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.55s
2025-09-23 07:36:50.195729 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.09s
2025-09-23 07:36:50.195735 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.93s
2025-09-23 07:36:50.195742 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.72s
2025-09-23 07:36:50.195748 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.53s
2025-09-23 07:36:50.195755 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.53s
2025-09-23 07:36:50.195762 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.15s
2025-09-23 07:36:50.195769 | orchestrator | common : Creating log volume -------------------------------------------- 1.92s
2025-09-23 07:36:50.195776 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.84s
2025-09-23 07:36:50.195782 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.75s
2025-09-23 07:36:50.195789 | orchestrator | common : include_tasks -------------------------------------------------- 1.49s
2025-09-23 07:36:50.195795 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.38s
2025-09-23 07:36:50.195802 | orchestrator | 2025-09-23 07:36:50 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:36:53.219877 | orchestrator | 2025-09-23 07:36:53 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:36:53.220194 | orchestrator | 2025-09-23 07:36:53 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:36:53.220722 | orchestrator | 2025-09-23 07:36:53 | INFO  | Task c4b971d4-8cc3-4cf6-bd1a-62c8a8d9947b is in state STARTED
2025-09-23 07:36:53.221233 | orchestrator | 2025-09-23 07:36:53 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED
2025-09-23 07:36:53.222682 | orchestrator | 2025-09-23 07:36:53 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED
2025-09-23 07:36:53.223214 | orchestrator | 2025-09-23 07:36:53 | INFO  | Task
77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:36:53.223249 | orchestrator | 2025-09-23 07:36:53 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:56.248651 | orchestrator | 2025-09-23 07:36:56 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:56.249429 | orchestrator | 2025-09-23 07:36:56 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:36:56.250621 | orchestrator | 2025-09-23 07:36:56 | INFO  | Task c4b971d4-8cc3-4cf6-bd1a-62c8a8d9947b is in state STARTED 2025-09-23 07:36:56.253937 | orchestrator | 2025-09-23 07:36:56 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED 2025-09-23 07:36:56.254692 | orchestrator | 2025-09-23 07:36:56 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:56.255358 | orchestrator | 2025-09-23 07:36:56 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:36:56.255387 | orchestrator | 2025-09-23 07:36:56 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:36:59.294204 | orchestrator | 2025-09-23 07:36:59 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:36:59.294322 | orchestrator | 2025-09-23 07:36:59 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:36:59.295180 | orchestrator | 2025-09-23 07:36:59 | INFO  | Task c4b971d4-8cc3-4cf6-bd1a-62c8a8d9947b is in state STARTED 2025-09-23 07:36:59.295930 | orchestrator | 2025-09-23 07:36:59 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED 2025-09-23 07:36:59.296798 | orchestrator | 2025-09-23 07:36:59 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:36:59.297787 | orchestrator | 2025-09-23 07:36:59 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:36:59.297820 | orchestrator | 2025-09-23 07:36:59 | INFO  | Wait 1 
second(s) until the next check 2025-09-23 07:37:02.328880 | orchestrator | 2025-09-23 07:37:02 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:02.329287 | orchestrator | 2025-09-23 07:37:02 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:02.330774 | orchestrator | 2025-09-23 07:37:02 | INFO  | Task c4b971d4-8cc3-4cf6-bd1a-62c8a8d9947b is in state STARTED 2025-09-23 07:37:02.331784 | orchestrator | 2025-09-23 07:37:02 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED 2025-09-23 07:37:02.332369 | orchestrator | 2025-09-23 07:37:02 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:02.333248 | orchestrator | 2025-09-23 07:37:02 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:02.333277 | orchestrator | 2025-09-23 07:37:02 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:05.411940 | orchestrator | 2025-09-23 07:37:05 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:05.436681 | orchestrator | 2025-09-23 07:37:05 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:05.436756 | orchestrator | 2025-09-23 07:37:05 | INFO  | Task c4b971d4-8cc3-4cf6-bd1a-62c8a8d9947b is in state STARTED 2025-09-23 07:37:05.436792 | orchestrator | 2025-09-23 07:37:05 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED 2025-09-23 07:37:05.436805 | orchestrator | 2025-09-23 07:37:05 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:05.436816 | orchestrator | 2025-09-23 07:37:05 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:05.436828 | orchestrator | 2025-09-23 07:37:05 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:08.451323 | orchestrator | 2025-09-23 07:37:08 | INFO  | Task 
f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:08.453336 | orchestrator | 2025-09-23 07:37:08 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:08.453659 | orchestrator | 2025-09-23 07:37:08 | INFO  | Task c4b971d4-8cc3-4cf6-bd1a-62c8a8d9947b is in state SUCCESS 2025-09-23 07:37:08.454326 | orchestrator | 2025-09-23 07:37:08 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED 2025-09-23 07:37:08.454958 | orchestrator | 2025-09-23 07:37:08 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:08.455803 | orchestrator | 2025-09-23 07:37:08 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:08.455834 | orchestrator | 2025-09-23 07:37:08 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:11.505911 | orchestrator | 2025-09-23 07:37:11 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:11.507612 | orchestrator | 2025-09-23 07:37:11 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:11.507657 | orchestrator | 2025-09-23 07:37:11 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED 2025-09-23 07:37:11.510277 | orchestrator | 2025-09-23 07:37:11 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:11.510337 | orchestrator | 2025-09-23 07:37:11 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:11.510358 | orchestrator | 2025-09-23 07:37:11 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:11.510377 | orchestrator | 2025-09-23 07:37:11 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:14.552168 | orchestrator | 2025-09-23 07:37:14 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:14.552527 | orchestrator | 2025-09-23 07:37:14 | INFO  | Task 
e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:37:14.553317 | orchestrator | 2025-09-23 07:37:14 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state STARTED
2025-09-23 07:37:14.553966 | orchestrator | 2025-09-23 07:37:14 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED
2025-09-23 07:37:14.554801 | orchestrator | 2025-09-23 07:37:14 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED
2025-09-23 07:37:14.555291 | orchestrator | 2025-09-23 07:37:14 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:37:14.555396 | orchestrator | 2025-09-23 07:37:14 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:37:17.589974 | orchestrator | 2025-09-23 07:37:17 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:37:17.592215 | orchestrator | 2025-09-23 07:37:17 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:37:17.596617 | orchestrator | 2025-09-23 07:37:17 | INFO  | Task c3d524dd-8e7a-4152-826e-5383ea4d51e6 is in state SUCCESS
2025-09-23 07:37:17.596693 | orchestrator | 2025-09-23 07:37:17 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED
2025-09-23 07:37:17.596713 | orchestrator | 2025-09-23 07:37:17 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED
2025-09-23 07:37:17.598290 | orchestrator |
2025-09-23 07:37:17.598323 | orchestrator |
2025-09-23 07:37:17.598334 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:37:17.598344 | orchestrator |
2025-09-23 07:37:17.598354 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:37:17.598364 | orchestrator | Tuesday 23 September 2025 07:36:53 +0000 (0:00:00.302) 0:00:00.302 *****
2025-09-23 07:37:17.598374 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:17.598384 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:17.598393 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:17.598403 | orchestrator |
2025-09-23 07:37:17.598412 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:37:17.598422 | orchestrator | Tuesday 23 September 2025 07:36:53 +0000 (0:00:00.388) 0:00:00.691 *****
2025-09-23 07:37:17.598432 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-23 07:37:17.598442 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-23 07:37:17.598451 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-23 07:37:17.598461 | orchestrator |
2025-09-23 07:37:17.598470 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-23 07:37:17.598480 | orchestrator |
2025-09-23 07:37:17.598489 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-23 07:37:17.598498 | orchestrator | Tuesday 23 September 2025 07:36:54 +0000 (0:00:00.694) 0:00:01.385 *****
2025-09-23 07:37:17.598509 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:37:17.598519 | orchestrator |
2025-09-23 07:37:17.598529 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-23 07:37:17.598538 | orchestrator | Tuesday 23 September 2025 07:36:55 +0000 (0:00:00.846) 0:00:02.232 *****
2025-09-23 07:37:17.598548 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-23 07:37:17.598557 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-23 07:37:17.598567 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-23 07:37:17.598576 | orchestrator |
2025-09-23 07:37:17.598586 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-23 07:37:17.598595 | orchestrator | Tuesday 23 September 2025 07:36:56 +0000 (0:00:00.980) 0:00:03.212 *****
2025-09-23 07:37:17.598604 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-23 07:37:17.598614 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-23 07:37:17.598623 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-23 07:37:17.598633 | orchestrator |
2025-09-23 07:37:17.598642 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-23 07:37:17.598652 | orchestrator | Tuesday 23 September 2025 07:36:58 +0000 (0:00:02.152) 0:00:05.365 *****
2025-09-23 07:37:17.598661 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:17.598671 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:17.598680 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:17.598689 | orchestrator |
2025-09-23 07:37:17.598699 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-23 07:37:17.598708 | orchestrator | Tuesday 23 September 2025 07:37:00 +0000 (0:00:02.143) 0:00:07.508 *****
2025-09-23 07:37:17.598718 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:17.598727 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:17.598736 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:17.598746 | orchestrator |
2025-09-23 07:37:17.598755 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:37:17.598765 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:37:17.598789 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:37:17.598799 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:37:17.598808 | orchestrator |
2025-09-23 07:37:17.598818 | orchestrator |
2025-09-23 07:37:17.598827 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:37:17.598837 | orchestrator | Tuesday 23 September 2025 07:37:06 +0000 (0:00:06.347) 0:00:13.855 *****
2025-09-23 07:37:17.598860 | orchestrator | ===============================================================================
2025-09-23 07:37:17.598869 | orchestrator | memcached : Restart memcached container --------------------------------- 6.35s
2025-09-23 07:37:17.598879 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.15s
2025-09-23 07:37:17.598888 | orchestrator | memcached : Check memcached container ----------------------------------- 2.14s
2025-09-23 07:37:17.598900 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.98s
2025-09-23 07:37:17.598911 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s
2025-09-23 07:37:17.598922 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2025-09-23 07:37:17.598933 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2025-09-23 07:37:17.598944 | orchestrator |
2025-09-23 07:37:17.598955 | orchestrator |
2025-09-23 07:37:17.598966 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:37:17.598977 | orchestrator |
2025-09-23 07:37:17.598988 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:37:17.598999 | orchestrator | Tuesday 23 September 2025 07:36:54 +0000 (0:00:00.365) 0:00:00.365 *****
2025-09-23 07:37:17.599010 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:17.599040 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:17.599051 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:17.599062 |
orchestrator | 2025-09-23 07:37:17.599073 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:37:17.599095 | orchestrator | Tuesday 23 September 2025 07:36:55 +0000 (0:00:00.779) 0:00:01.144 ***** 2025-09-23 07:37:17.599107 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-23 07:37:17.599118 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-23 07:37:17.599129 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-23 07:37:17.599140 | orchestrator | 2025-09-23 07:37:17.599151 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-23 07:37:17.599162 | orchestrator | 2025-09-23 07:37:17.599173 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-23 07:37:17.599189 | orchestrator | Tuesday 23 September 2025 07:36:56 +0000 (0:00:00.796) 0:00:01.941 ***** 2025-09-23 07:37:17.599208 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:37:17.599232 | orchestrator | 2025-09-23 07:37:17.599248 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-23 07:37:17.599263 | orchestrator | Tuesday 23 September 2025 07:36:57 +0000 (0:00:00.950) 0:00:02.891 ***** 2025-09-23 07:37:17.599281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': 
'30'}}}) 2025-09-23 07:37:17.599317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599379 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599409 | orchestrator | 2025-09-23 07:37:17.599419 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-23 07:37:17.599429 | orchestrator | Tuesday 23 September 2025 07:36:58 +0000 (0:00:01.620) 0:00:04.512 ***** 2025-09-23 07:37:17.599439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599516 | orchestrator | 2025-09-23 07:37:17.599526 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-23 07:37:17.599535 | orchestrator | Tuesday 23 September 2025 07:37:01 +0000 (0:00:02.660) 0:00:07.172 ***** 2025-09-23 07:37:17.599546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599616 | orchestrator | 2025-09-23 07:37:17.599631 | orchestrator | TASK [redis : Check redis containers] 
****************************************** 2025-09-23 07:37:17.599641 | orchestrator | Tuesday 23 September 2025 07:37:03 +0000 (0:00:02.450) 0:00:09.623 ***** 2025-09-23 07:37:17.599650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599686 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-23 07:37:17.599734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-23 07:37:17.599754 | orchestrator |
2025-09-23 07:37:17.599770 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-23 07:37:17.599785 | orchestrator | Tuesday 23 September 2025 07:37:05 +0000 (0:00:01.774) 0:00:11.398 *****
2025-09-23 07:37:17.599800 | orchestrator |
2025-09-23 07:37:17.599816 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-23 07:37:17.599841 | orchestrator | Tuesday 23 September 2025 07:37:06 +0000 (0:00:00.266) 0:00:11.664 *****
2025-09-23 07:37:17.599858 | orchestrator |
2025-09-23 07:37:17.599874 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-23 07:37:17.599901 | orchestrator | Tuesday 23 September 2025 07:37:06 +0000 (0:00:00.236) 0:00:11.901 *****
2025-09-23 07:37:17.599918 | orchestrator |
2025-09-23 07:37:17.599934 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-09-23 07:37:17.599950 | orchestrator | Tuesday 23 September 2025 07:37:06 +0000 (0:00:00.116) 0:00:12.017 *****
2025-09-23 07:37:17.599966 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:17.599982 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:17.599997 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:17.600012 | orchestrator |
2025-09-23 07:37:17.600104 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-23 07:37:17.600121 | orchestrator | Tuesday 23 September 2025 07:37:09 +0000 (0:00:03.537) 0:00:15.554 *****
2025-09-23 07:37:17.600138 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:17.600153 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:17.600169 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:17.600186 | orchestrator |
2025-09-23 07:37:17.600202 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:37:17.600218 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:37:17.600235 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:37:17.600251 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:37:17.600267 | orchestrator |
2025-09-23 07:37:17.600283 | orchestrator |
2025-09-23 07:37:17.600300 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:37:17.600317 | orchestrator | Tuesday 23 September 2025 07:37:14 +0000 (0:00:04.958) 0:00:20.513 *****
2025-09-23 07:37:17.600334 | orchestrator | ===============================================================================
2025-09-23 07:37:17.600351 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.96s
2025-09-23 07:37:17.600368 | orchestrator | redis : Restart redis container ----------------------------------------- 3.54s
2025-09-23 07:37:17.600385 | orchestrator | redis : Copying over default config.json files -------------------------- 2.66s
2025-09-23 07:37:17.600402 | orchestrator | redis : Copying over redis config files --------------------------------- 2.45s
2025-09-23 07:37:17.600418 | orchestrator | redis : Check redis containers ------------------------------------------ 1.78s
2025-09-23 07:37:17.600435 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.62s
2025-09-23 07:37:17.600452 | orchestrator | redis : include_tasks --------------------------------------------------- 0.95s
2025-09-23 07:37:17.600467 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2025-09-23 07:37:17.600483 | orchestrator |
Group hosts based on Kolla action --------------------------------------- 0.78s 2025-09-23 07:37:17.600499 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.62s 2025-09-23 07:37:17.600517 | orchestrator | 2025-09-23 07:37:17 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:17.600533 | orchestrator | 2025-09-23 07:37:17 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:20.626660 | orchestrator | 2025-09-23 07:37:20 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:20.630377 | orchestrator | 2025-09-23 07:37:20 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:20.632842 | orchestrator | 2025-09-23 07:37:20 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:20.633422 | orchestrator | 2025-09-23 07:37:20 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:20.635245 | orchestrator | 2025-09-23 07:37:20 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:20.635371 | orchestrator | 2025-09-23 07:37:20 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:23.681106 | orchestrator | 2025-09-23 07:37:23 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:23.682575 | orchestrator | 2025-09-23 07:37:23 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:23.685233 | orchestrator | 2025-09-23 07:37:23 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:23.686238 | orchestrator | 2025-09-23 07:37:23 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:23.687370 | orchestrator | 2025-09-23 07:37:23 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:23.687465 | orchestrator | 2025-09-23 07:37:23 | INFO  | 
Wait 1 second(s) until the next check 2025-09-23 07:37:26.725418 | orchestrator | 2025-09-23 07:37:26 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:26.725501 | orchestrator | 2025-09-23 07:37:26 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:26.725632 | orchestrator | 2025-09-23 07:37:26 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:26.726483 | orchestrator | 2025-09-23 07:37:26 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:26.728091 | orchestrator | 2025-09-23 07:37:26 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:26.728142 | orchestrator | 2025-09-23 07:37:26 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:29.781637 | orchestrator | 2025-09-23 07:37:29 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:29.781869 | orchestrator | 2025-09-23 07:37:29 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:29.782488 | orchestrator | 2025-09-23 07:37:29 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:29.786240 | orchestrator | 2025-09-23 07:37:29 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:29.786853 | orchestrator | 2025-09-23 07:37:29 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:29.786875 | orchestrator | 2025-09-23 07:37:29 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:32.826927 | orchestrator | 2025-09-23 07:37:32 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:32.827304 | orchestrator | 2025-09-23 07:37:32 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:32.828034 | orchestrator | 2025-09-23 07:37:32 | INFO  | Task 
7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:32.829682 | orchestrator | 2025-09-23 07:37:32 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:32.830388 | orchestrator | 2025-09-23 07:37:32 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:32.832443 | orchestrator | 2025-09-23 07:37:32 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:35.965091 | orchestrator | 2025-09-23 07:37:35 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:35.966397 | orchestrator | 2025-09-23 07:37:35 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:35.968050 | orchestrator | 2025-09-23 07:37:35 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:35.969559 | orchestrator | 2025-09-23 07:37:35 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:35.971149 | orchestrator | 2025-09-23 07:37:35 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:35.971537 | orchestrator | 2025-09-23 07:37:35 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:39.004760 | orchestrator | 2025-09-23 07:37:39 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:39.007643 | orchestrator | 2025-09-23 07:37:39 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:39.011118 | orchestrator | 2025-09-23 07:37:39 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:39.012113 | orchestrator | 2025-09-23 07:37:39 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:39.014300 | orchestrator | 2025-09-23 07:37:39 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:39.014418 | orchestrator | 2025-09-23 07:37:39 | INFO  | Wait 1 
second(s) until the next check 2025-09-23 07:37:42.051077 | orchestrator | 2025-09-23 07:37:42 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:42.051182 | orchestrator | 2025-09-23 07:37:42 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:42.051638 | orchestrator | 2025-09-23 07:37:42 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:42.054201 | orchestrator | 2025-09-23 07:37:42 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:42.055224 | orchestrator | 2025-09-23 07:37:42 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:42.055276 | orchestrator | 2025-09-23 07:37:42 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:45.100723 | orchestrator | 2025-09-23 07:37:45 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:45.103520 | orchestrator | 2025-09-23 07:37:45 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:45.104204 | orchestrator | 2025-09-23 07:37:45 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:45.104934 | orchestrator | 2025-09-23 07:37:45 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:45.105912 | orchestrator | 2025-09-23 07:37:45 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:45.105951 | orchestrator | 2025-09-23 07:37:45 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:48.214887 | orchestrator | 2025-09-23 07:37:48 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:48.215246 | orchestrator | 2025-09-23 07:37:48 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:48.215620 | orchestrator | 2025-09-23 07:37:48 | INFO  | Task 
7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:48.216038 | orchestrator | 2025-09-23 07:37:48 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:48.216591 | orchestrator | 2025-09-23 07:37:48 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:48.216712 | orchestrator | 2025-09-23 07:37:48 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:51.450094 | orchestrator | 2025-09-23 07:37:51 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:51.450229 | orchestrator | 2025-09-23 07:37:51 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:51.450247 | orchestrator | 2025-09-23 07:37:51 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:51.450259 | orchestrator | 2025-09-23 07:37:51 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:51.450270 | orchestrator | 2025-09-23 07:37:51 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:51.450281 | orchestrator | 2025-09-23 07:37:51 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:37:54.486165 | orchestrator | 2025-09-23 07:37:54 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:54.486266 | orchestrator | 2025-09-23 07:37:54 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:54.486282 | orchestrator | 2025-09-23 07:37:54 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state STARTED 2025-09-23 07:37:54.488392 | orchestrator | 2025-09-23 07:37:54 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 07:37:54.489085 | orchestrator | 2025-09-23 07:37:54 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:54.489238 | orchestrator | 2025-09-23 07:37:54 | INFO  | Wait 1 
second(s) until the next check 2025-09-23 07:37:57.519178 | orchestrator | 2025-09-23 07:37:57 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:37:57.521447 | orchestrator | 2025-09-23 07:37:57 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:37:57.523870 | orchestrator | 2025-09-23 07:37:57 | INFO  | Task 7cc8a538-0a08-4209-86d5-3b76ddf324e9 is in state SUCCESS 2025-09-23 07:37:57.526187 | orchestrator | 2025-09-23 07:37:57.526212 | orchestrator | 2025-09-23 07:37:57.526223 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-23 07:37:57.526232 | orchestrator | 2025-09-23 07:37:57.526239 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-23 07:37:57.526247 | orchestrator | Tuesday 23 September 2025 07:34:32 +0000 (0:00:00.170) 0:00:00.170 ***** 2025-09-23 07:37:57.526254 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:37:57.526261 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:37:57.526268 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:37:57.526275 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.526282 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.526289 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.526296 | orchestrator | 2025-09-23 07:37:57.526303 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-23 07:37:57.526311 | orchestrator | Tuesday 23 September 2025 07:34:32 +0000 (0:00:00.543) 0:00:00.714 ***** 2025-09-23 07:37:57.526318 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.526326 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.526333 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.526340 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.526347 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.526354 
| orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.526361 | orchestrator | 2025-09-23 07:37:57.526368 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-23 07:37:57.526375 | orchestrator | Tuesday 23 September 2025 07:34:33 +0000 (0:00:00.529) 0:00:01.243 ***** 2025-09-23 07:37:57.526382 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.526389 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.526396 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.526403 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.526411 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.526430 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.526437 | orchestrator | 2025-09-23 07:37:57.526444 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-23 07:37:57.526452 | orchestrator | Tuesday 23 September 2025 07:34:34 +0000 (0:00:00.570) 0:00:01.814 ***** 2025-09-23 07:37:57.526458 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.526465 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:37:57.526472 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:37:57.526479 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:37:57.526506 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:37:57.526513 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:37:57.526520 | orchestrator | 2025-09-23 07:37:57.526527 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-23 07:37:57.526534 | orchestrator | Tuesday 23 September 2025 07:34:36 +0000 (0:00:01.958) 0:00:03.772 ***** 2025-09-23 07:37:57.526541 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:37:57.526548 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:37:57.526555 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:37:57.526562 | orchestrator 
| changed: [testbed-node-0] 2025-09-23 07:37:57.526569 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:37:57.526576 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:37:57.526583 | orchestrator | 2025-09-23 07:37:57.526590 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-23 07:37:57.526597 | orchestrator | Tuesday 23 September 2025 07:34:37 +0000 (0:00:01.167) 0:00:04.939 ***** 2025-09-23 07:37:57.526604 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:37:57.526611 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:37:57.526618 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:37:57.526625 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.526632 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:37:57.526639 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:37:57.526645 | orchestrator | 2025-09-23 07:37:57.526652 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-23 07:37:57.526659 | orchestrator | Tuesday 23 September 2025 07:34:38 +0000 (0:00:01.053) 0:00:05.993 ***** 2025-09-23 07:37:57.526666 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.526673 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.526680 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.526687 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.526694 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.526701 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.526708 | orchestrator | 2025-09-23 07:37:57.526715 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-23 07:37:57.526722 | orchestrator | Tuesday 23 September 2025 07:34:38 +0000 (0:00:00.425) 0:00:06.418 ***** 2025-09-23 07:37:57.526729 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.526736 | orchestrator | skipping: 
[testbed-node-4] 2025-09-23 07:37:57.526743 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.526750 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.526757 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.526764 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.526771 | orchestrator | 2025-09-23 07:37:57.526778 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-23 07:37:57.526785 | orchestrator | Tuesday 23 September 2025 07:34:39 +0000 (0:00:00.697) 0:00:07.116 ***** 2025-09-23 07:37:57.526792 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 07:37:57.526799 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 07:37:57.526806 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.526813 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 07:37:57.526820 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 07:37:57.526832 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.526840 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 07:37:57.526847 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 07:37:57.526855 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.526863 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 07:37:57.526882 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 07:37:57.526890 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.526898 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 07:37:57.526905 | orchestrator | skipping: [testbed-node-1] 
=> (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 07:37:57.526913 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.526920 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 07:37:57.526927 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 07:37:57.526933 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.526939 | orchestrator | 2025-09-23 07:37:57.526946 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-23 07:37:57.526953 | orchestrator | Tuesday 23 September 2025 07:34:40 +0000 (0:00:00.896) 0:00:08.013 ***** 2025-09-23 07:37:57.526960 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.526967 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.526974 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.526981 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.527003 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.527010 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.527015 | orchestrator | 2025-09-23 07:37:57.527021 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-23 07:37:57.527026 | orchestrator | Tuesday 23 September 2025 07:34:41 +0000 (0:00:01.615) 0:00:09.628 ***** 2025-09-23 07:37:57.527032 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:37:57.527037 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:37:57.527042 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:37:57.527047 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.527053 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.527058 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.527064 | orchestrator | 2025-09-23 07:37:57.527070 | orchestrator | TASK [k3s_download : Download k3s binary x64] 
********************************** 2025-09-23 07:37:57.527076 | orchestrator | Tuesday 23 September 2025 07:34:42 +0000 (0:00:00.818) 0:00:10.447 ***** 2025-09-23 07:37:57.527083 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:37:57.527090 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:37:57.527097 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:37:57.527104 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:37:57.527111 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:37:57.527117 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.527125 | orchestrator | 2025-09-23 07:37:57.527132 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-23 07:37:57.527139 | orchestrator | Tuesday 23 September 2025 07:34:48 +0000 (0:00:05.434) 0:00:15.881 ***** 2025-09-23 07:37:57.527146 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.527153 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.527160 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.527167 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.527174 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.527181 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.527188 | orchestrator | 2025-09-23 07:37:57.527195 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-23 07:37:57.527202 | orchestrator | Tuesday 23 September 2025 07:34:49 +0000 (0:00:01.201) 0:00:17.083 ***** 2025-09-23 07:37:57.527215 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.527222 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.527229 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.527236 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.527243 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.527250 | orchestrator | skipping: [testbed-node-2] 
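The three download tasks above pick one k3s binary per CPU architecture; only the x64 variant ran here, with arm64 and armhf skipped. A plain-shell sketch of that selection logic — the suffix mapping follows k3s release asset naming, and `k3s_suffix` is an illustrative helper of ours, not part of the k3s_download role:

```shell
# Suffix selection as the per-arch download tasks imply it.
# k3s_suffix is a hypothetical helper, not taken from the role.
k3s_suffix() {
    case "$1" in
        x86_64)  echo "" ;;        # "Download k3s binary x64"
        aarch64) echo "-arm64" ;;  # "Download k3s binary arm64"
        armv7l)  echo "-armhf" ;;  # "Download k3s binary armhf"
        *)       return 1 ;;
    esac
}

echo "would fetch: k3s$(k3s_suffix "$(uname -m)")"
```

On the x86_64 testbed nodes this resolves to the plain `k3s` binary, which matches the log: the x64 task reports `changed` while the arm64 and armhf tasks are skipped on every node.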
2025-09-23 07:37:57.527258 | orchestrator | 2025-09-23 07:37:57.527265 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-23 07:37:57.527273 | orchestrator | Tuesday 23 September 2025 07:34:52 +0000 (0:00:02.930) 0:00:20.013 ***** 2025-09-23 07:37:57.527280 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:37:57.527287 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:37:57.527294 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:37:57.527301 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.527308 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.527315 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.527322 | orchestrator | 2025-09-23 07:37:57.527329 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-23 07:37:57.527336 | orchestrator | Tuesday 23 September 2025 07:34:53 +0000 (0:00:01.450) 0:00:21.464 ***** 2025-09-23 07:37:57.527343 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-23 07:37:57.527350 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-23 07:37:57.527357 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-23 07:37:57.527364 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-23 07:37:57.527371 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-23 07:37:57.527378 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-23 07:37:57.527385 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-23 07:37:57.527392 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-23 07:37:57.527399 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-23 07:37:57.527406 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-23 07:37:57.527413 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 
2025-09-23 07:37:57.527420 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-23 07:37:57.527427 | orchestrator | 2025-09-23 07:37:57.527434 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-23 07:37:57.527441 | orchestrator | Tuesday 23 September 2025 07:34:55 +0000 (0:00:02.189) 0:00:23.654 ***** 2025-09-23 07:37:57.527448 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:37:57.527455 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:37:57.527462 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:37:57.527470 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.527476 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:37:57.527486 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:37:57.527494 | orchestrator | 2025-09-23 07:37:57.527506 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-23 07:37:57.527513 | orchestrator | 2025-09-23 07:37:57.527520 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-23 07:37:57.527527 | orchestrator | Tuesday 23 September 2025 07:34:57 +0000 (0:00:02.014) 0:00:25.668 ***** 2025-09-23 07:37:57.527534 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.527541 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.527548 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.527555 | orchestrator | 2025-09-23 07:37:57.527562 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-23 07:37:57.527570 | orchestrator | Tuesday 23 September 2025 07:34:58 +0000 (0:00:00.838) 0:00:26.506 ***** 2025-09-23 07:37:57.527577 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.527583 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.527590 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.527597 | orchestrator | 
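The k3s_custom_registries tasks above create `/etc/rancher/k3s` and insert mirror entries into `registries.yaml`. A minimal sketch of what such a file looks like, written under `/tmp` here so it is harmless to run; the `docker.io` mirror endpoint is a hypothetical placeholder, not the testbed's actual registry:

```shell
# Render a minimal registries.yaml of the shape the role's tasks suggest
# (the directory layout mirrors /etc/rancher/k3s on the real nodes).
mkdir -p /tmp/rancher/k3s
cat > /tmp/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"   # placeholder mirror, not from the log
EOF
```

k3s reads this file at startup and translates it into its embedded containerd registry configuration, which is why the role only has to write the file before the servers are started.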
2025-09-23 07:37:57.527604 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-23 07:37:57.527615 | orchestrator | Tuesday 23 September 2025 07:35:00 +0000 (0:00:01.308) 0:00:27.815 ***** 2025-09-23 07:37:57.527622 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.527629 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.527636 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.527643 | orchestrator | 2025-09-23 07:37:57.527650 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-23 07:37:57.527657 | orchestrator | Tuesday 23 September 2025 07:35:01 +0000 (0:00:00.955) 0:00:28.771 ***** 2025-09-23 07:37:57.527664 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.527671 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.527678 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.527685 | orchestrator | 2025-09-23 07:37:57.527692 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-23 07:37:57.527699 | orchestrator | Tuesday 23 September 2025 07:35:01 +0000 (0:00:00.883) 0:00:29.654 ***** 2025-09-23 07:37:57.527706 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.527714 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.527721 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.527728 | orchestrator | 2025-09-23 07:37:57.527735 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-23 07:37:57.527742 | orchestrator | Tuesday 23 September 2025 07:35:02 +0000 (0:00:00.316) 0:00:29.971 ***** 2025-09-23 07:37:57.527749 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.527756 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.527763 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.527770 | orchestrator | 2025-09-23 07:37:57.527777 | orchestrator | TASK 
[k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-23 07:37:57.527783 | orchestrator | Tuesday 23 September 2025 07:35:02 +0000 (0:00:00.746) 0:00:30.717 ***** 2025-09-23 07:37:57.527790 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.527798 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:37:57.527805 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:37:57.527812 | orchestrator | 2025-09-23 07:37:57.527819 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-23 07:37:57.527826 | orchestrator | Tuesday 23 September 2025 07:35:04 +0000 (0:00:01.377) 0:00:32.095 ***** 2025-09-23 07:37:57.527833 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:37:57.527840 | orchestrator | 2025-09-23 07:37:57.527847 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-23 07:37:57.527854 | orchestrator | Tuesday 23 September 2025 07:35:05 +0000 (0:00:00.698) 0:00:32.794 ***** 2025-09-23 07:37:57.527861 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:37:57.527868 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:37:57.527875 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:37:57.527882 | orchestrator | 2025-09-23 07:37:57.527889 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-23 07:37:57.527896 | orchestrator | Tuesday 23 September 2025 07:35:07 +0000 (0:00:02.406) 0:00:35.200 ***** 2025-09-23 07:37:57.527903 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.527910 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.527917 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.527924 | orchestrator | 2025-09-23 07:37:57.527930 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-23 
07:37:57.527935 | orchestrator | Tuesday 23 September 2025 07:35:08 +0000 (0:00:00.596) 0:00:35.797 ***** 2025-09-23 07:37:57.527942 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.527949 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.527956 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.527963 | orchestrator | 2025-09-23 07:37:57.527970 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-23 07:37:57.527977 | orchestrator | Tuesday 23 September 2025 07:35:09 +0000 (0:00:01.013) 0:00:36.811 ***** 2025-09-23 07:37:57.527998 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.528006 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.528013 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.528020 | orchestrator | 2025-09-23 07:37:57.528027 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-23 07:37:57.528034 | orchestrator | Tuesday 23 September 2025 07:35:10 +0000 (0:00:01.492) 0:00:38.303 ***** 2025-09-23 07:37:57.528041 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.528048 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.528055 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.528063 | orchestrator | 2025-09-23 07:37:57.528070 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-23 07:37:57.528077 | orchestrator | Tuesday 23 September 2025 07:35:11 +0000 (0:00:00.430) 0:00:38.734 ***** 2025-09-23 07:37:57.528084 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.528091 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.528098 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.528105 | orchestrator | 2025-09-23 07:37:57.528112 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-23 
07:37:57.528119 | orchestrator | Tuesday 23 September 2025 07:35:11 +0000 (0:00:00.470) 0:00:39.204 ***** 2025-09-23 07:37:57.528126 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:37:57.528136 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:37:57.528143 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:37:57.528150 | orchestrator | 2025-09-23 07:37:57.528161 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-23 07:37:57.528168 | orchestrator | Tuesday 23 September 2025 07:35:14 +0000 (0:00:02.579) 0:00:41.783 ***** 2025-09-23 07:37:57.528176 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-23 07:37:57.528183 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-23 07:37:57.528190 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-23 07:37:57.528197 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-23 07:37:57.528205 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-23 07:37:57.528212 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-23 07:37:57.528219 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2025-09-23 07:37:57.528226 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-23 07:37:57.528233 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-23 07:37:57.528240 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-23 07:37:57.528247 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-23 07:37:57.528254 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-23 07:37:57.528261 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.528268 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.528275 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.528282 | orchestrator |
2025-09-23 07:37:57.528295 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-23 07:37:57.528302 | orchestrator | Tuesday 23 September 2025 07:35:58 +0000 (0:00:44.408) 0:01:26.192 *****
2025-09-23 07:37:57.528310 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.528316 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:37:57.528323 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:37:57.528331 | orchestrator |
2025-09-23 07:37:57.528338 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-23 07:37:57.528345 | orchestrator | Tuesday 23 September 2025 07:35:58 +0000 (0:00:00.264) 0:01:26.457 *****
2025-09-23 07:37:57.528352 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:57.528359 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:57.528366 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:57.528373 | orchestrator |
2025-09-23 07:37:57.528381 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-23 07:37:57.528388 | orchestrator | Tuesday 23 September 2025 07:35:59 +0000 (0:00:00.914) 0:01:27.371 *****
2025-09-23 07:37:57.528395 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:57.528402 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:57.528409 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:57.528416 | orchestrator |
2025-09-23 07:37:57.528423 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-23 07:37:57.528430 | orchestrator | Tuesday 23 September 2025 07:36:00 +0000 (0:00:01.352) 0:01:28.724 *****
2025-09-23 07:37:57.528437 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:57.528444 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:57.528452 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:57.528459 | orchestrator |
2025-09-23 07:37:57.528466 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-23 07:37:57.528473 | orchestrator | Tuesday 23 September 2025 07:36:25 +0000 (0:00:24.918) 0:01:53.642 *****
2025-09-23 07:37:57.528480 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.528487 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.528494 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.528501 | orchestrator |
2025-09-23 07:37:57.528508 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-23 07:37:57.528515 | orchestrator | Tuesday 23 September 2025 07:36:26 +0000 (0:00:00.677) 0:01:54.319 *****
2025-09-23 07:37:57.528522 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.528529 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.528536 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.528543 | orchestrator |
2025-09-23 07:37:57.528551 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-23 07:37:57.528558 | orchestrator | Tuesday 23 September 2025 07:36:27 +0000 (0:00:00.649) 0:01:54.969 *****
2025-09-23 07:37:57.528565 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:57.528572 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:57.528579 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:57.528586 | orchestrator |
2025-09-23 07:37:57.528593 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-23 07:37:57.528603 | orchestrator | Tuesday 23 September 2025 07:36:27 +0000 (0:00:00.653) 0:01:55.622 *****
2025-09-23 07:37:57.528611 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.528621 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.528648 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.528656 | orchestrator |
2025-09-23 07:37:57.528663 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-23 07:37:57.528670 | orchestrator | Tuesday 23 September 2025 07:36:28 +0000 (0:00:00.813) 0:01:56.435 *****
2025-09-23 07:37:57.528677 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.528685 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.528692 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.528699 | orchestrator |
2025-09-23 07:37:57.528706 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-23 07:37:57.528718 | orchestrator | Tuesday 23 September 2025 07:36:29 +0000 (0:00:00.317) 0:01:56.753 *****
2025-09-23 07:37:57.528726 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:57.528748 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:57.528756 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:57.528763 | orchestrator |
2025-09-23 07:37:57.528770 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-23 07:37:57.528777 | orchestrator | Tuesday 23 September 2025 07:36:29 +0000 (0:00:00.628) 0:01:57.382 *****
2025-09-23 07:37:57.528784 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:57.528791 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:57.528798 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:57.528805 | orchestrator |
2025-09-23 07:37:57.528812 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-23 07:37:57.528819 | orchestrator | Tuesday 23 September 2025 07:36:30 +0000 (0:00:00.758) 0:01:58.140 *****
2025-09-23 07:37:57.528826 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:57.528833 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:57.528840 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:57.528847 | orchestrator |
2025-09-23 07:37:57.528854 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-23 07:37:57.528861 | orchestrator | Tuesday 23 September 2025 07:36:31 +0000 (0:00:01.221) 0:01:59.362 *****
2025-09-23 07:37:57.528869 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:37:57.528876 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:37:57.528883 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:37:57.528890 | orchestrator |
2025-09-23 07:37:57.528897 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-23 07:37:57.528904 | orchestrator | Tuesday 23 September 2025 07:36:32 +0000 (0:00:00.904) 0:02:00.267 *****
2025-09-23 07:37:57.528911 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.528918 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:37:57.528924 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:37:57.528931 | orchestrator |
2025-09-23 07:37:57.528937 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-23 07:37:57.528944 | orchestrator | Tuesday 23 September 2025 07:36:32 +0000 (0:00:00.287) 0:02:00.555 *****
2025-09-23 07:37:57.528951 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.528958 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:37:57.528965 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:37:57.528973 | orchestrator |
2025-09-23 07:37:57.528980 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-23 07:37:57.529008 | orchestrator | Tuesday 23 September 2025 07:36:33 +0000 (0:00:00.299) 0:02:00.854 *****
2025-09-23 07:37:57.529017 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.529024 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.529031 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.529038 | orchestrator |
2025-09-23 07:37:57.529045 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-23 07:37:57.529052 | orchestrator | Tuesday 23 September 2025 07:36:33 +0000 (0:00:00.790) 0:02:01.644 *****
2025-09-23 07:37:57.529059 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.529066 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.529073 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.529080 | orchestrator |
2025-09-23 07:37:57.529087 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-23 07:37:57.529095 | orchestrator | Tuesday 23 September 2025 07:36:34 +0000 (0:00:00.595) 0:02:02.239 *****
2025-09-23 07:37:57.529102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-23 07:37:57.529109 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-23 07:37:57.529116 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-23 07:37:57.529127 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-23 07:37:57.529134 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-23 07:37:57.529141 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-23 07:37:57.529149 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-23 07:37:57.529156 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-23 07:37:57.529163 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-23 07:37:57.529170 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-23 07:37:57.529177 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-23 07:37:57.529184 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-23 07:37:57.529191 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-23 07:37:57.529205 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-23 07:37:57.529212 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-23 07:37:57.529219 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-23 07:37:57.529226 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-23 07:37:57.529233 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-23 07:37:57.529240 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-23 07:37:57.529247 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-23 07:37:57.529255 | orchestrator |
2025-09-23 07:37:57.529262 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-23 07:37:57.529269 | orchestrator |
2025-09-23 07:37:57.529276 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-23 07:37:57.529282 | orchestrator | Tuesday 23 September 2025 07:36:37 +0000 (0:00:02.761) 0:02:05.001 *****
2025-09-23 07:37:57.529290 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:37:57.529297 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:37:57.529304 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:37:57.529311 | orchestrator |
2025-09-23 07:37:57.529318 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-23 07:37:57.529325 | orchestrator | Tuesday 23 September 2025 07:36:37 +0000 (0:00:00.401) 0:02:05.403 *****
2025-09-23 07:37:57.529332 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:37:57.529339 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:37:57.529346 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:37:57.529353 | orchestrator |
2025-09-23 07:37:57.529360 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-23 07:37:57.529367 | orchestrator | Tuesday 23 September 2025 07:36:38 +0000 (0:00:00.633) 0:02:06.037 *****
2025-09-23 07:37:57.529374 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:37:57.529381 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:37:57.529388 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:37:57.529395 | orchestrator |
2025-09-23 07:37:57.529402 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-23 07:37:57.529409 | orchestrator | Tuesday 23 September 2025 07:36:38 +0000 (0:00:00.272) 0:02:06.310 *****
2025-09-23 07:37:57.529416 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:37:57.529423 | orchestrator |
2025-09-23 07:37:57.529430 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-23 07:37:57.529441 | orchestrator | Tuesday 23 September 2025 07:36:39 +0000 (0:00:00.586) 0:02:06.896 *****
2025-09-23 07:37:57.529448 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:37:57.529455 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:37:57.529462 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:37:57.529469 | orchestrator |
2025-09-23 07:37:57.529476 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-23 07:37:57.529483 | orchestrator | Tuesday 23 September 2025 07:36:39 +0000 (0:00:00.308) 0:02:07.204 *****
2025-09-23 07:37:57.529491 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:37:57.529498 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:37:57.529505 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:37:57.529512 | orchestrator |
2025-09-23 07:37:57.529519 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-23 07:37:57.529526 | orchestrator | Tuesday 23 September 2025 07:36:39 +0000 (0:00:00.258) 0:02:07.463 *****
2025-09-23 07:37:57.529533 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:37:57.529539 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:37:57.529546 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:37:57.529553 | orchestrator |
2025-09-23 07:37:57.529560 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-23 07:37:57.529567 | orchestrator | Tuesday 23 September 2025 07:36:40 +0000 (0:00:00.318) 0:02:07.781 *****
2025-09-23 07:37:57.529574 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:37:57.529581 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:37:57.529588 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:37:57.529596 | orchestrator |
2025-09-23 07:37:57.529603 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-23 07:37:57.529610 | orchestrator | Tuesday 23 September 2025 07:36:40 +0000 (0:00:00.760) 0:02:08.542 *****
2025-09-23 07:37:57.529617 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:37:57.529624 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:37:57.529631 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:37:57.529638 | orchestrator |
2025-09-23 07:37:57.529645 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-23 07:37:57.529652 | orchestrator | Tuesday 23 September 2025 07:36:41 +0000 (0:00:00.960) 0:02:09.502 *****
2025-09-23 07:37:57.529659 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:37:57.529666 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:37:57.529673 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:37:57.529680 | orchestrator |
2025-09-23 07:37:57.529687 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-23 07:37:57.529694 | orchestrator | Tuesday 23 September 2025 07:36:42 +0000 (0:00:01.143) 0:02:10.646 *****
2025-09-23 07:37:57.529701 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:37:57.529708 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:37:57.529715 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:37:57.529722 | orchestrator |
2025-09-23 07:37:57.529729 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-23 07:37:57.529737 | orchestrator |
2025-09-23 07:37:57.529744 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-23 07:37:57.529751 | orchestrator | Tuesday 23 September 2025 07:36:54 +0000 (0:00:12.016) 0:02:22.663 *****
2025-09-23 07:37:57.529758 | orchestrator | ok: [testbed-manager]
2025-09-23 07:37:57.529765 | orchestrator |
2025-09-23 07:37:57.529772 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-23 07:37:57.529784 | orchestrator | Tuesday 23 September 2025 07:36:55 +0000 (0:00:00.789) 0:02:23.452 *****
2025-09-23 07:37:57.529795 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.529802 | orchestrator |
2025-09-23 07:37:57.529809 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-23 07:37:57.529816 | orchestrator | Tuesday 23 September 2025 07:36:56 +0000 (0:00:00.390) 0:02:23.842 *****
2025-09-23 07:37:57.529823 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-23 07:37:57.529833 | orchestrator |
2025-09-23 07:37:57.529840 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-23 07:37:57.529847 | orchestrator | Tuesday 23 September 2025 07:36:56 +0000 (0:00:00.505) 0:02:24.348 *****
2025-09-23 07:37:57.529855 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.529862 | orchestrator |
2025-09-23 07:37:57.529869 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-23 07:37:57.529876 | orchestrator | Tuesday 23 September 2025 07:36:57 +0000 (0:00:00.774) 0:02:25.122 *****
2025-09-23 07:37:57.529883 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.529890 | orchestrator |
2025-09-23 07:37:57.529897 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-23 07:37:57.529904 | orchestrator | Tuesday 23 September 2025 07:36:58 +0000 (0:00:00.626) 0:02:25.749 *****
2025-09-23 07:37:57.529911 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-23 07:37:57.529918 | orchestrator |
2025-09-23 07:37:57.529925 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-23 07:37:57.529931 | orchestrator | Tuesday 23 September 2025 07:36:59 +0000 (0:00:01.474) 0:02:27.223 *****
2025-09-23 07:37:57.529937 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-23 07:37:57.529944 | orchestrator |
2025-09-23 07:37:57.529951 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-23 07:37:57.529958 | orchestrator | Tuesday 23 September 2025 07:37:00 +0000 (0:00:00.771) 0:02:27.994 *****
2025-09-23 07:37:57.529965 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.529973 | orchestrator |
2025-09-23 07:37:57.529980 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-23 07:37:57.529994 | orchestrator | Tuesday 23 September 2025 07:37:00 +0000 (0:00:00.420) 0:02:28.415 *****
2025-09-23 07:37:57.530002 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.530009 | orchestrator |
2025-09-23 07:37:57.530040 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-23 07:37:57.530049 | orchestrator |
2025-09-23 07:37:57.530056 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-23 07:37:57.530063 | orchestrator | Tuesday 23 September 2025 07:37:01 +0000 (0:00:00.565) 0:02:28.980 *****
2025-09-23 07:37:57.530070 | orchestrator | ok: [testbed-manager]
2025-09-23 07:37:57.530077 | orchestrator |
2025-09-23 07:37:57.530084 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-23 07:37:57.530091 | orchestrator | Tuesday 23 September 2025 07:37:01 +0000 (0:00:00.127) 0:02:29.108 *****
2025-09-23 07:37:57.530098 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-23 07:37:57.530105 | orchestrator |
2025-09-23 07:37:57.530112 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-23 07:37:57.530119 | orchestrator | Tuesday 23 September 2025 07:37:01 +0000 (0:00:00.204) 0:02:29.313 *****
2025-09-23 07:37:57.530126 | orchestrator | ok: [testbed-manager]
2025-09-23 07:37:57.530134 | orchestrator |
2025-09-23 07:37:57.530141 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-09-23 07:37:57.530148 | orchestrator | Tuesday 23 September 2025 07:37:02 +0000 (0:00:00.688) 0:02:30.001 *****
2025-09-23 07:37:57.530155 | orchestrator | ok: [testbed-manager]
2025-09-23 07:37:57.530163 | orchestrator |
2025-09-23 07:37:57.530170 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-23 07:37:57.530177 | orchestrator | Tuesday 23 September 2025 07:37:03 +0000 (0:00:01.322) 0:02:31.323 *****
2025-09-23 07:37:57.530184 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.530191 | orchestrator |
2025-09-23 07:37:57.530198 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-23 07:37:57.530205 | orchestrator | Tuesday 23 September 2025 07:37:04 +0000 (0:00:00.842) 0:02:32.166 *****
2025-09-23 07:37:57.530212 | orchestrator | ok: [testbed-manager]
2025-09-23 07:37:57.530223 | orchestrator |
2025-09-23 07:37:57.530231 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-23 07:37:57.530238 | orchestrator | Tuesday 23 September 2025 07:37:04 +0000 (0:00:00.385) 0:02:32.551 *****
2025-09-23 07:37:57.530245 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.530252 | orchestrator |
2025-09-23 07:37:57.530259 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-23 07:37:57.530266 | orchestrator | Tuesday 23 September 2025 07:37:12 +0000 (0:00:07.201) 0:02:39.753 *****
2025-09-23 07:37:57.530273 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.530280 | orchestrator |
2025-09-23 07:37:57.530287 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-23 07:37:57.530295 | orchestrator | Tuesday 23 September 2025 07:37:24 +0000 (0:00:12.358) 0:02:52.111 *****
2025-09-23 07:37:57.530303 | orchestrator | ok: [testbed-manager]
2025-09-23 07:37:57.530310 | orchestrator |
2025-09-23 07:37:57.530318 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-23 07:37:57.530325 | orchestrator |
2025-09-23 07:37:57.530332 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-23 07:37:57.530339 | orchestrator | Tuesday 23 September 2025 07:37:24 +0000 (0:00:00.459) 0:02:52.570 *****
2025-09-23 07:37:57.530347 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.530355 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.530363 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.530370 | orchestrator |
2025-09-23 07:37:57.530378 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-23 07:37:57.530385 | orchestrator | Tuesday 23 September 2025 07:37:25 +0000 (0:00:00.420) 0:02:52.991 *****
2025-09-23 07:37:57.530392 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530399 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:37:57.530409 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:37:57.530416 | orchestrator |
2025-09-23 07:37:57.530428 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-23 07:37:57.530435 | orchestrator | Tuesday 23 September 2025 07:37:25 +0000 (0:00:00.345) 0:02:53.336 *****
2025-09-23 07:37:57.530443 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:37:57.530450 | orchestrator |
2025-09-23 07:37:57.530457 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-23 07:37:57.530464 | orchestrator | Tuesday 23 September 2025 07:37:26 +0000 (0:00:00.672) 0:02:54.008 *****
2025-09-23 07:37:57.530471 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530479 | orchestrator |
2025-09-23 07:37:57.530486 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-23 07:37:57.530493 | orchestrator | Tuesday 23 September 2025 07:37:26 +0000 (0:00:00.191) 0:02:54.200 *****
2025-09-23 07:37:57.530500 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530507 | orchestrator |
2025-09-23 07:37:57.530514 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-23 07:37:57.530522 | orchestrator | Tuesday 23 September 2025 07:37:26 +0000 (0:00:00.221) 0:02:54.421 *****
2025-09-23 07:37:57.530529 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530536 | orchestrator |
2025-09-23 07:37:57.530543 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-23 07:37:57.530550 | orchestrator | Tuesday 23 September 2025 07:37:26 +0000 (0:00:00.251) 0:02:54.673 *****
2025-09-23 07:37:57.530558 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530566 | orchestrator |
2025-09-23 07:37:57.530573 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-23 07:37:57.530581 | orchestrator | Tuesday 23 September 2025 07:37:27 +0000 (0:00:00.255) 0:02:54.928 *****
2025-09-23 07:37:57.530588 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530595 | orchestrator |
2025-09-23 07:37:57.530602 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-23 07:37:57.530613 | orchestrator | Tuesday 23 September 2025 07:37:27 +0000 (0:00:00.274) 0:02:55.203 *****
2025-09-23 07:37:57.530620 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530627 | orchestrator |
2025-09-23 07:37:57.530634 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-23 07:37:57.530642 | orchestrator | Tuesday 23 September 2025 07:37:27 +0000 (0:00:00.248) 0:02:55.452 *****
2025-09-23 07:37:57.530649 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530657 | orchestrator |
2025-09-23 07:37:57.530664 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-23 07:37:57.530671 | orchestrator | Tuesday 23 September 2025 07:37:27 +0000 (0:00:00.249) 0:02:55.701 *****
2025-09-23 07:37:57.530678 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530685 | orchestrator |
2025-09-23 07:37:57.530693 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-23 07:37:57.530700 | orchestrator | Tuesday 23 September 2025 07:37:28 +0000 (0:00:00.319) 0:02:56.021 *****
2025-09-23 07:37:57.530707 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530714 | orchestrator |
2025-09-23 07:37:57.530721 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-23 07:37:57.530728 | orchestrator | Tuesday 23 September 2025 07:37:28 +0000 (0:00:00.244) 0:02:56.265 *****
2025-09-23 07:37:57.530735 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-23 07:37:57.530742 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-23 07:37:57.530749 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530756 | orchestrator |
2025-09-23 07:37:57.530763 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-23 07:37:57.530771 | orchestrator | Tuesday 23 September 2025 07:37:29 +0000 (0:00:00.858) 0:02:57.123 *****
2025-09-23 07:37:57.530778 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530785 | orchestrator |
2025-09-23 07:37:57.530792 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-23 07:37:57.530799 | orchestrator | Tuesday 23 September 2025 07:37:29 +0000 (0:00:00.200) 0:02:57.324 *****
2025-09-23 07:37:57.530806 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530813 | orchestrator |
2025-09-23 07:37:57.530820 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-23 07:37:57.530827 | orchestrator | Tuesday 23 September 2025 07:37:30 +0000 (0:00:00.588) 0:02:57.912 *****
2025-09-23 07:37:57.530834 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530841 | orchestrator |
2025-09-23 07:37:57.530849 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-23 07:37:57.530856 | orchestrator | Tuesday 23 September 2025 07:37:30 +0000 (0:00:00.225) 0:02:58.138 *****
2025-09-23 07:37:57.530863 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530870 | orchestrator |
2025-09-23 07:37:57.530877 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-23 07:37:57.530885 | orchestrator | Tuesday 23 September 2025 07:37:30 +0000 (0:00:00.184) 0:02:58.322 *****
2025-09-23 07:37:57.530892 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530900 | orchestrator |
2025-09-23 07:37:57.530907 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-23 07:37:57.530914 | orchestrator | Tuesday 23 September 2025 07:37:30 +0000 (0:00:00.145) 0:02:58.468 *****
2025-09-23 07:37:57.530921 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530927 | orchestrator |
2025-09-23 07:37:57.530934 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-23 07:37:57.530941 | orchestrator | Tuesday 23 September 2025 07:37:30 +0000 (0:00:00.203) 0:02:58.672 *****
2025-09-23 07:37:57.530948 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.530955 | orchestrator |
2025-09-23 07:37:57.530962 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-23 07:37:57.530969 | orchestrator | Tuesday 23 September 2025 07:37:31 +0000 (0:00:00.149) 0:02:58.822 *****
2025-09-23 07:37:57.530980 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531012 | orchestrator |
2025-09-23 07:37:57.531022 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-23 07:37:57.531033 | orchestrator | Tuesday 23 September 2025 07:37:31 +0000 (0:00:00.176) 0:02:58.998 *****
2025-09-23 07:37:57.531040 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531047 | orchestrator |
2025-09-23 07:37:57.531054 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-23 07:37:57.531061 | orchestrator | Tuesday 23 September 2025 07:37:31 +0000 (0:00:00.267) 0:02:59.265 *****
2025-09-23 07:37:57.531067 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531074 | orchestrator |
2025-09-23 07:37:57.531081 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-23 07:37:57.531088 | orchestrator | Tuesday 23 September 2025 07:37:31 +0000 (0:00:00.153) 0:02:59.419 *****
2025-09-23 07:37:57.531095 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531102 | orchestrator |
2025-09-23 07:37:57.531109 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-23 07:37:57.531116 | orchestrator | Tuesday 23 September 2025 07:37:31 +0000 (0:00:00.198) 0:02:59.617 *****
2025-09-23 07:37:57.531123 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-23 07:37:57.531130 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-23 07:37:57.531137 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-23 07:37:57.531143 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-23 07:37:57.531150 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531157 | orchestrator |
2025-09-23 07:37:57.531164 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-23 07:37:57.531171 | orchestrator | Tuesday 23 September 2025 07:37:32 +0000 (0:00:00.962) 0:03:00.579 *****
2025-09-23 07:37:57.531178 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531185 | orchestrator |
2025-09-23 07:37:57.531192 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-23 07:37:57.531199 | orchestrator | Tuesday 23 September 2025 07:37:33 +0000 (0:00:00.227) 0:03:00.807 *****
2025-09-23 07:37:57.531206 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531213 | orchestrator |
2025-09-23 07:37:57.531220 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-23 07:37:57.531227 | orchestrator | Tuesday 23 September 2025 07:37:33 +0000 (0:00:00.183) 0:03:00.991 *****
2025-09-23 07:37:57.531234 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531240 | orchestrator |
2025-09-23 07:37:57.531247 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-23 07:37:57.531254 | orchestrator | Tuesday 23 September 2025 07:37:33 +0000 (0:00:00.213) 0:03:01.204 *****
2025-09-23 07:37:57.531262 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531269 | orchestrator |
2025-09-23 07:37:57.531276 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-23 07:37:57.531283 | orchestrator | Tuesday 23 September 2025 07:37:33 +0000 (0:00:00.257) 0:03:01.462 *****
2025-09-23 07:37:57.531290 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-23 07:37:57.531297 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-23 07:37:57.531304 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531311 | orchestrator |
2025-09-23 07:37:57.531317 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-23 07:37:57.531324 | orchestrator | Tuesday 23 September 2025 07:37:34 +0000 (0:00:00.425) 0:03:01.887 *****
2025-09-23 07:37:57.531331 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:37:57.531338 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:37:57.531345 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:37:57.531352 | orchestrator |
2025-09-23 07:37:57.531359 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-23 07:37:57.531372 | orchestrator | Tuesday 23 September 2025 07:37:34 +0000 (0:00:00.697) 0:03:02.585 *****
2025-09-23 07:37:57.531379 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.531386 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.531393 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.531400 | orchestrator |
2025-09-23 07:37:57.531407 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-23 07:37:57.531413 | orchestrator |
2025-09-23 07:37:57.531420 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-23 07:37:57.531427 | orchestrator | Tuesday 23 September 2025 07:37:35 +0000 (0:00:01.004) 0:03:03.589 *****
2025-09-23 07:37:57.531434 | orchestrator | ok: [testbed-manager]
2025-09-23 07:37:57.531441 | orchestrator |
2025-09-23 07:37:57.531448 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-23 07:37:57.531455 | orchestrator | Tuesday 23 September 2025 07:37:35 +0000 (0:00:00.135) 0:03:03.724 *****
2025-09-23 07:37:57.531462 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-23 07:37:57.531469 | orchestrator |
2025-09-23 07:37:57.531476 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-23 07:37:57.531484 | orchestrator | Tuesday 23 September 2025 07:37:36 +0000 (0:00:00.227) 0:03:03.952 *****
2025-09-23 07:37:57.531490 | orchestrator | changed: [testbed-manager]
2025-09-23 07:37:57.531498 | orchestrator |
2025-09-23 07:37:57.531505 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-23 07:37:57.531512 | orchestrator |
2025-09-23 07:37:57.531519 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-23 07:37:57.531526 | orchestrator | Tuesday 23 September 2025 07:37:42 +0000 (0:00:05.834) 0:03:09.786 *****
2025-09-23 07:37:57.531532 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:37:57.531539 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:37:57.531546 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:37:57.531553 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:37:57.531560 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:37:57.531567 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:37:57.531574 | orchestrator |
2025-09-23 07:37:57.531581 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-23 07:37:57.531590 | orchestrator | Tuesday 23 September 2025 07:37:43 +0000 (0:00:01.064) 0:03:10.852 *****
2025-09-23 07:37:57.531601 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-23 07:37:57.531608 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-23 07:37:57.531615 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-23 07:37:57.531622 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-23 07:37:57.531629 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-23 07:37:57.531636 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-23 07:37:57.531643 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-23 07:37:57.531650 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-23 07:37:57.531657 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-23 07:37:57.531664 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-23 07:37:57.531671 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-23 07:37:57.531678 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-23
07:37:57.531685 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-23 07:37:57.531696 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-23 07:37:57.531703 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-23 07:37:57.531710 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-23 07:37:57.531716 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-23 07:37:57.531723 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-23 07:37:57.531730 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-23 07:37:57.531737 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-23 07:37:57.531744 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-23 07:37:57.531751 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-23 07:37:57.531758 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-23 07:37:57.531765 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-23 07:37:57.531772 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-23 07:37:57.531779 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-23 07:37:57.531786 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-23 07:37:57.531793 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-23 07:37:57.531800 | 
orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-23 07:37:57.531807 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-23 07:37:57.531814 | orchestrator | 2025-09-23 07:37:57.531821 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-23 07:37:57.531828 | orchestrator | Tuesday 23 September 2025 07:37:55 +0000 (0:00:12.237) 0:03:23.089 ***** 2025-09-23 07:37:57.531835 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.531842 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.531849 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.531856 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.531863 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.531870 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.531877 | orchestrator | 2025-09-23 07:37:57.531884 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-23 07:37:57.531891 | orchestrator | Tuesday 23 September 2025 07:37:55 +0000 (0:00:00.590) 0:03:23.680 ***** 2025-09-23 07:37:57.531898 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:37:57.531905 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:37:57.531912 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:37:57.531919 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:37:57.531925 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:37:57.531932 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:37:57.531938 | orchestrator | 2025-09-23 07:37:57.531945 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:37:57.531953 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:37:57.531960 | orchestrator | 
testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-23 07:37:57.531967 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-23 07:37:57.531978 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-23 07:37:57.532000 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-23 07:37:57.532009 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-23 07:37:57.532016 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-23 07:37:57.532023 | orchestrator | 2025-09-23 07:37:57.532030 | orchestrator | 2025-09-23 07:37:57.532038 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:37:57.532045 | orchestrator | Tuesday 23 September 2025 07:37:56 +0000 (0:00:00.479) 0:03:24.160 ***** 2025-09-23 07:37:57.532056 | orchestrator | =============================================================================== 2025-09-23 07:37:57.532063 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.41s 2025-09-23 07:37:57.532071 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.92s 2025-09-23 07:37:57.532078 | orchestrator | kubectl : Install required packages ------------------------------------ 12.36s 2025-09-23 07:37:57.532085 | orchestrator | Manage labels ---------------------------------------------------------- 12.24s 2025-09-23 07:37:57.532092 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.02s 2025-09-23 07:37:57.532099 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.20s 2025-09-23 
07:37:57.532106 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.83s 2025-09-23 07:37:57.532113 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.43s 2025-09-23 07:37:57.532120 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.93s 2025-09-23 07:37:57.532127 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.76s 2025-09-23 07:37:57.532134 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.58s 2025-09-23 07:37:57.532141 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.41s 2025-09-23 07:37:57.532149 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.19s 2025-09-23 07:37:57.532156 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.01s 2025-09-23 07:37:57.532163 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.96s 2025-09-23 07:37:57.532170 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.62s 2025-09-23 07:37:57.532177 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.49s 2025-09-23 07:37:57.532184 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.47s 2025-09-23 07:37:57.532191 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.45s 2025-09-23 07:37:57.532198 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.38s 2025-09-23 07:37:57.532206 | orchestrator | 2025-09-23 07:37:57 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state STARTED 2025-09-23 
07:37:57.532213 | orchestrator | 2025-09-23 07:37:57 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:37:57.532220 | orchestrator | 2025-09-23 07:37:57 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:38:00.578282 | orchestrator | 2025-09-23 07:38:00 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:38:00.582000 | orchestrator | 2025-09-23 07:38:00 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:38:00.583222 | orchestrator | 2025-09-23 07:38:00 | INFO  | Task a78ecba2-fe64-4791-9b10-047fe19b610b is in state STARTED 2025-09-23 07:38:00.584527 | orchestrator | 2025-09-23 07:38:00 | INFO  | Task 77691318-0af4-4a5f-a396-b827e7cfd8b1 is in state SUCCESS 2025-09-23 07:38:00.584682 | orchestrator | 2025-09-23 07:38:00.586660 | orchestrator | 2025-09-23 07:38:00.586691 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:38:00.586704 | orchestrator | 2025-09-23 07:38:00.586715 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:38:00.586727 | orchestrator | Tuesday 23 September 2025 07:36:53 +0000 (0:00:00.415) 0:00:00.415 ***** 2025-09-23 07:38:00.586738 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:38:00.586750 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:38:00.586760 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:38:00.586771 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:38:00.586781 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:38:00.586792 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:38:00.586803 | orchestrator | 2025-09-23 07:38:00.586813 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:38:00.586824 | orchestrator | Tuesday 23 September 2025 07:36:54 +0000 (0:00:01.071) 0:00:01.487 ***** 2025-09-23 07:38:00.586848 | orchestrator | 
ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-23 07:38:00.586859 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-23 07:38:00.586870 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-23 07:38:00.586880 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-23 07:38:00.586891 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-23 07:38:00.586902 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-23 07:38:00.586912 | orchestrator | 2025-09-23 07:38:00.586923 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-23 07:38:00.586933 | orchestrator | 2025-09-23 07:38:00.586944 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-23 07:38:00.586955 | orchestrator | Tuesday 23 September 2025 07:36:55 +0000 (0:00:01.010) 0:00:02.497 ***** 2025-09-23 07:38:00.586966 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:38:00.586979 | orchestrator | 2025-09-23 07:38:00.587016 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-23 07:38:00.587027 | orchestrator | Tuesday 23 September 2025 07:36:58 +0000 (0:00:02.084) 0:00:04.582 ***** 2025-09-23 07:38:00.587038 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-23 07:38:00.587050 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-23 07:38:00.587060 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-23 07:38:00.587071 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-23 
07:38:00.587082 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-23 07:38:00.587093 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-23 07:38:00.587103 | orchestrator | 2025-09-23 07:38:00.587114 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-23 07:38:00.587125 | orchestrator | Tuesday 23 September 2025 07:36:59 +0000 (0:00:01.360) 0:00:05.943 ***** 2025-09-23 07:38:00.587136 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-23 07:38:00.587146 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-23 07:38:00.587157 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-23 07:38:00.587168 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-23 07:38:00.587178 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-23 07:38:00.587203 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-23 07:38:00.587214 | orchestrator | 2025-09-23 07:38:00.587225 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-23 07:38:00.587236 | orchestrator | Tuesday 23 September 2025 07:37:01 +0000 (0:00:01.624) 0:00:07.567 ***** 2025-09-23 07:38:00.587246 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-23 07:38:00.587259 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:38:00.587273 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-23 07:38:00.587286 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:38:00.587299 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-23 07:38:00.587311 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:38:00.587323 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-23 07:38:00.587335 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:38:00.587348 | orchestrator 
| skipping: [testbed-node-1] => (item=openvswitch)  2025-09-23 07:38:00.587361 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:38:00.587373 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-23 07:38:00.587385 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:38:00.587397 | orchestrator | 2025-09-23 07:38:00.587409 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-23 07:38:00.587422 | orchestrator | Tuesday 23 September 2025 07:37:02 +0000 (0:00:01.209) 0:00:08.776 ***** 2025-09-23 07:38:00.587435 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:38:00.587448 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:38:00.587460 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:38:00.587472 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:38:00.587484 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:38:00.587496 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:38:00.587508 | orchestrator | 2025-09-23 07:38:00.587520 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-23 07:38:00.587532 | orchestrator | Tuesday 23 September 2025 07:37:03 +0000 (0:00:00.807) 0:00:09.584 ***** 2025-09-23 07:38:00.587565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587709 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587749 | orchestrator | 2025-09-23 07:38:00.587760 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-23 07:38:00.587772 | orchestrator | Tuesday 23 September 2025 07:37:04 +0000 (0:00:01.799) 0:00:11.384 ***** 2025-09-23 07:38:00.587788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587800 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-23 07:38:00.587817 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.587828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.587840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.587868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.587891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.587909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.587920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.587931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.587942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.587961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.587973 | orchestrator |
2025-09-23 07:38:00.588001 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-09-23 07:38:00.588013 | orchestrator | Tuesday 23 September 2025 07:37:08 +0000 (0:00:03.289) 0:00:14.673 *****
2025-09-23 07:38:00.588024 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:38:00.588035 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:38:00.588051 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:38:00.588062 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:38:00.588081 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:38:00.588092 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:38:00.588102 | orchestrator |
2025-09-23 07:38:00.588113 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-09-23 07:38:00.588124 | orchestrator | Tuesday 23 September 2025 07:37:10 +0000 (0:00:02.189) 0:00:16.863 *****
2025-09-23 07:38:00.588135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.588147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.588158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.588169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.588187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.588211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.588223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-23 07:38:00.588234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.588246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.588257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.588275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.588298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-23 07:38:00.588309 | orchestrator |
2025-09-23 07:38:00.588320 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-23 07:38:00.588331 | orchestrator | Tuesday 23 September 2025 07:37:13 +0000 (0:00:03.353) 0:00:20.216 *****
2025-09-23 07:38:00.588342 | orchestrator |
2025-09-23 07:38:00.588353 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-23 07:38:00.588363 | orchestrator | Tuesday 23 September 2025 07:37:13 +0000 (0:00:00.303) 0:00:20.520 *****
2025-09-23 07:38:00.588374 | orchestrator |
2025-09-23 07:38:00.588385 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-23 07:38:00.588395 | orchestrator | Tuesday 23
September 2025 07:37:14 +0000 (0:00:00.127) 0:00:20.647 *****
2025-09-23 07:38:00.588406 | orchestrator |
2025-09-23 07:38:00.588417 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-23 07:38:00.588427 | orchestrator | Tuesday 23 September 2025 07:37:14 +0000 (0:00:00.126) 0:00:20.774 *****
2025-09-23 07:38:00.588438 | orchestrator |
2025-09-23 07:38:00.588449 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-23 07:38:00.588459 | orchestrator | Tuesday 23 September 2025 07:37:14 +0000 (0:00:00.123) 0:00:20.897 *****
2025-09-23 07:38:00.588470 | orchestrator |
2025-09-23 07:38:00.588481 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-23 07:38:00.588491 | orchestrator | Tuesday 23 September 2025 07:37:14 +0000 (0:00:00.120) 0:00:21.018 *****
2025-09-23 07:38:00.588502 | orchestrator |
2025-09-23 07:38:00.588513 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-23 07:38:00.588523 | orchestrator | Tuesday 23 September 2025 07:37:14 +0000 (0:00:00.223) 0:00:21.241 *****
2025-09-23 07:38:00.588534 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:38:00.588545 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:38:00.588556 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:38:00.588566 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:38:00.588577 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:38:00.588588 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:38:00.588599 | orchestrator |
2025-09-23 07:38:00.588610 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-23 07:38:00.588620 | orchestrator | Tuesday 23 September 2025 07:37:25 +0000 (0:00:10.315) 0:00:31.557 *****
2025-09-23 07:38:00.588631 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:38:00.588642 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:38:00.588653 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:38:00.588663 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:38:00.588674 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:38:00.588684 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:38:00.588695 | orchestrator |
2025-09-23 07:38:00.588705 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-23 07:38:00.588716 | orchestrator | Tuesday 23 September 2025 07:37:26 +0000 (0:00:01.840) 0:00:33.397 *****
2025-09-23 07:38:00.588727 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:38:00.588744 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:38:00.588755 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:38:00.588765 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:38:00.588776 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:38:00.588787 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:38:00.588798 | orchestrator |
2025-09-23 07:38:00.588809 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-23 07:38:00.588819 | orchestrator | Tuesday 23 September 2025 07:37:33 +0000 (0:00:06.971) 0:00:40.369 *****
2025-09-23 07:38:00.588830 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-23 07:38:00.588841 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-23 07:38:00.588852 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-23 07:38:00.588863 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-23 07:38:00.588874 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-23 07:38:00.588890 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-23 07:38:00.588901 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-23 07:38:00.588912 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-23 07:38:00.588923 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-23 07:38:00.588933 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-23 07:38:00.588948 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-23 07:38:00.588959 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-23 07:38:00.588970 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-23 07:38:00.588980 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-23 07:38:00.589032 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-23 07:38:00.589043 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-23 07:38:00.589053 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-23 07:38:00.589064 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-23 07:38:00.589075 | orchestrator |
2025-09-23 07:38:00.589085 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-23 07:38:00.589096 | orchestrator | Tuesday 23 September 2025 07:37:42 +0000 (0:00:08.223) 0:00:48.593 *****
2025-09-23 07:38:00.589107 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-09-23 07:38:00.589118 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:38:00.589129 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-09-23 07:38:00.589140 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:38:00.589150 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-09-23 07:38:00.589161 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:38:00.589172 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-23 07:38:00.589183 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-09-23 07:38:00.589201 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-09-23 07:38:00.589211 | orchestrator |
2025-09-23 07:38:00.589222 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-09-23 07:38:00.589233 | orchestrator | Tuesday 23 September 2025 07:37:46 +0000 (0:00:04.000) 0:00:52.594 *****
2025-09-23 07:38:00.589244 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-09-23 07:38:00.589255 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:38:00.589265 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-09-23 07:38:00.589276 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:38:00.589287 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-09-23 07:38:00.589297 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:38:00.589308 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-09-23 07:38:00.589319 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-23 07:38:00.589330 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-23 07:38:00.589340 | orchestrator |
2025-09-23 07:38:00.589351 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-23 07:38:00.589362 | orchestrator | Tuesday 23 September 2025 07:37:50 +0000 (0:00:03.949) 0:00:56.543 *****
2025-09-23 07:38:00.589372 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:38:00.589383 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:38:00.589394 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:38:00.589405 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:38:00.589415 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:38:00.589426 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:38:00.589437 | orchestrator |
2025-09-23 07:38:00.589447 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:38:00.589459 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-23 07:38:00.589470 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-23 07:38:00.589481 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-23 07:38:00.589492 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-23 07:38:00.589503 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-23 07:38:00.589521 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-23 07:38:00.589532 | orchestrator |
2025-09-23 07:38:00.589543 | orchestrator |
2025-09-23 07:38:00.589554 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:38:00.589565 | orchestrator | Tuesday 23 September 2025 07:37:58 +0000 (0:00:08.922) 0:01:05.466 *****
2025-09-23 07:38:00.589576 | orchestrator | ===============================================================================
2025-09-23 07:38:00.589586 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.89s
2025-09-23 07:38:00.589597 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.32s
2025-09-23 07:38:00.589608 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.22s
2025-09-23 07:38:00.589624 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 4.00s
2025-09-23 07:38:00.589635 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.95s
2025-09-23 07:38:00.589645 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.35s
2025-09-23 07:38:00.589663 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.29s
2025-09-23 07:38:00.589673 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.19s
2025-09-23 07:38:00.589684 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.08s
2025-09-23 07:38:00.589694 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.84s
2025-09-23 07:38:00.589705 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.80s
2025-09-23 07:38:00.589716 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.62s
2025-09-23 07:38:00.589726 | orchestrator | module-load : Load modules ---------------------------------------------- 1.36s
2025-09-23 07:38:00.589737 | orchestrator |
module-load : Drop module persistence ----------------------------------- 1.21s
2025-09-23 07:38:00.589747 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.07s
2025-09-23 07:38:00.589758 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.03s
2025-09-23 07:38:00.589769 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.01s
2025-09-23 07:38:00.589779 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.81s
2025-09-23 07:38:00.589790 | orchestrator | 2025-09-23 07:38:00 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:00.589801 | orchestrator | 2025-09-23 07:38:00 | INFO  | Task 23ed0e6a-d6b0-49fe-a599-fca8f815d943 is in state STARTED
2025-09-23 07:38:00.590769 | orchestrator | 2025-09-23 07:38:00 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:03.618522 | orchestrator | 2025-09-23 07:38:03 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:03.619124 | orchestrator | 2025-09-23 07:38:03 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:03.619797 | orchestrator | 2025-09-23 07:38:03 | INFO  | Task a78ecba2-fe64-4791-9b10-047fe19b610b is in state STARTED
2025-09-23 07:38:03.621278 | orchestrator | 2025-09-23 07:38:03 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:03.621848 | orchestrator | 2025-09-23 07:38:03 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:03.622857 | orchestrator | 2025-09-23 07:38:03 | INFO  | Task 23ed0e6a-d6b0-49fe-a599-fca8f815d943 is in state STARTED
2025-09-23 07:38:03.622893 | orchestrator | 2025-09-23 07:38:03 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:06.661350 | orchestrator | 2025-09-23 07:38:06 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:06.662175 | orchestrator | 2025-09-23 07:38:06 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:06.663967 | orchestrator | 2025-09-23 07:38:06 | INFO  | Task a78ecba2-fe64-4791-9b10-047fe19b610b is in state STARTED
2025-09-23 07:38:06.664535 | orchestrator | 2025-09-23 07:38:06 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:06.665178 | orchestrator | 2025-09-23 07:38:06 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:06.665643 | orchestrator | 2025-09-23 07:38:06 | INFO  | Task 23ed0e6a-d6b0-49fe-a599-fca8f815d943 is in state SUCCESS
2025-09-23 07:38:06.667244 | orchestrator | 2025-09-23 07:38:06 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:09.690588 | orchestrator | 2025-09-23 07:38:09 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:09.691178 | orchestrator | 2025-09-23 07:38:09 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:09.694091 | orchestrator | 2025-09-23 07:38:09 | INFO  | Task a78ecba2-fe64-4791-9b10-047fe19b610b is in state SUCCESS
2025-09-23 07:38:09.694921 | orchestrator | 2025-09-23 07:38:09 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:09.696061 | orchestrator | 2025-09-23 07:38:09 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:09.696271 | orchestrator | 2025-09-23 07:38:09 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:12.744841 | orchestrator | 2025-09-23 07:38:12 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:12.747948 | orchestrator | 2025-09-23 07:38:12 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:12.750453 | orchestrator | 2025-09-23 07:38:12 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:12.752504 | orchestrator | 2025-09-23 07:38:12 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:12.752862 | orchestrator | 2025-09-23 07:38:12 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:15.804052 | orchestrator | 2025-09-23 07:38:15 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:15.806956 | orchestrator | 2025-09-23 07:38:15 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:15.811474 | orchestrator | 2025-09-23 07:38:15 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:15.814382 | orchestrator | 2025-09-23 07:38:15 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:15.814448 | orchestrator | 2025-09-23 07:38:15 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:18.846777 | orchestrator | 2025-09-23 07:38:18 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:18.848097 | orchestrator | 2025-09-23 07:38:18 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:18.850678 | orchestrator | 2025-09-23 07:38:18 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:18.851301 | orchestrator | 2025-09-23 07:38:18 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:18.851603 | orchestrator | 2025-09-23 07:38:18 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:21.877777 | orchestrator | 2025-09-23 07:38:21 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:21.878219 | orchestrator | 2025-09-23 07:38:21 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:21.878696 | orchestrator | 2025-09-23 07:38:21 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:21.879343 | orchestrator | 2025-09-23 07:38:21 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:21.879392 | orchestrator | 2025-09-23 07:38:21 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:24.921733 | orchestrator | 2025-09-23 07:38:24 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:24.922725 | orchestrator | 2025-09-23 07:38:24 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:24.924090 | orchestrator | 2025-09-23 07:38:24 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:24.925240 | orchestrator | 2025-09-23 07:38:24 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:24.925295 | orchestrator | 2025-09-23 07:38:24 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:27.959607 | orchestrator | 2025-09-23 07:38:27 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:27.961653 | orchestrator | 2025-09-23 07:38:27 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:27.964237 | orchestrator | 2025-09-23 07:38:27 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:27.966243 | orchestrator | 2025-09-23 07:38:27 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:27.966309 | orchestrator | 2025-09-23 07:38:27 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:30.995135 | orchestrator | 2025-09-23 07:38:30 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:30.998233 | orchestrator | 2025-09-23 07:38:30 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:30.999580 | orchestrator | 2025-09-23 07:38:30 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:31.001176 | orchestrator | 2025-09-23 07:38:30 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:31.001222 | orchestrator | 2025-09-23 07:38:30 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:34.045537 | orchestrator | 2025-09-23 07:38:34 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:34.046654 | orchestrator | 2025-09-23 07:38:34 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:34.046828 | orchestrator | 2025-09-23 07:38:34 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:34.048851 | orchestrator | 2025-09-23 07:38:34 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:34.048910 | orchestrator | 2025-09-23 07:38:34 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:37.083416 | orchestrator | 2025-09-23 07:38:37 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:37.083523 | orchestrator | 2025-09-23 07:38:37 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:37.083550 | orchestrator | 2025-09-23 07:38:37 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:37.083858 | orchestrator | 2025-09-23 07:38:37 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:37.084814 | orchestrator | 2025-09-23 07:38:37 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:40.115843 | orchestrator | 2025-09-23 07:38:40 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:40.116573 | orchestrator | 2025-09-23 07:38:40 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:40.118879 | orchestrator | 2025-09-23 07:38:40 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:40.121681 | orchestrator | 2025-09-23 07:38:40 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:40.121708 | orchestrator | 2025-09-23 07:38:40 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:43.163037 | orchestrator | 2025-09-23 07:38:43 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:43.165249 | orchestrator | 2025-09-23 07:38:43 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:43.168363 | orchestrator | 2025-09-23 07:38:43 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:43.172204 | orchestrator | 2025-09-23 07:38:43 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:43.173416 | orchestrator | 2025-09-23 07:38:43 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:46.236227 | orchestrator | 2025-09-23 07:38:46 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:46.236852 | orchestrator | 2025-09-23 07:38:46 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:46.238143 | orchestrator | 2025-09-23 07:38:46 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED
2025-09-23 07:38:46.239537 | orchestrator | 2025-09-23 07:38:46 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED
2025-09-23 07:38:46.239559 | orchestrator | 2025-09-23 07:38:46 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:38:49.282861 | orchestrator | 2025-09-23 07:38:49 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:38:49.283250 | orchestrator | 2025-09-23 07:38:49 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:38:49.284214 | orchestrator | 2025-09-23 07:38:49 | INFO  | Task 
5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:38:49.285531 | orchestrator | 2025-09-23 07:38:49 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:38:49.285586 | orchestrator | 2025-09-23 07:38:49 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:38:52.333330 | orchestrator | 2025-09-23 07:38:52 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:38:52.333542 | orchestrator | 2025-09-23 07:38:52 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:38:52.335257 | orchestrator | 2025-09-23 07:38:52 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:38:52.339124 | orchestrator | 2025-09-23 07:38:52 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:38:52.339191 | orchestrator | 2025-09-23 07:38:52 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:38:55.367906 | orchestrator | 2025-09-23 07:38:55 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:38:55.369839 | orchestrator | 2025-09-23 07:38:55 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:38:55.373155 | orchestrator | 2025-09-23 07:38:55 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:38:55.373479 | orchestrator | 2025-09-23 07:38:55 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:38:55.373550 | orchestrator | 2025-09-23 07:38:55 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:38:58.411675 | orchestrator | 2025-09-23 07:38:58 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:38:58.414573 | orchestrator | 2025-09-23 07:38:58 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:38:58.417454 | orchestrator | 2025-09-23 07:38:58 | INFO  | Task 
5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:38:58.419212 | orchestrator | 2025-09-23 07:38:58 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:38:58.419738 | orchestrator | 2025-09-23 07:38:58 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:01.455270 | orchestrator | 2025-09-23 07:39:01 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:01.457194 | orchestrator | 2025-09-23 07:39:01 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:01.459670 | orchestrator | 2025-09-23 07:39:01 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:01.461959 | orchestrator | 2025-09-23 07:39:01 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:01.462106 | orchestrator | 2025-09-23 07:39:01 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:04.505030 | orchestrator | 2025-09-23 07:39:04 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:04.507232 | orchestrator | 2025-09-23 07:39:04 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:04.509320 | orchestrator | 2025-09-23 07:39:04 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:04.511248 | orchestrator | 2025-09-23 07:39:04 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:04.511280 | orchestrator | 2025-09-23 07:39:04 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:07.559269 | orchestrator | 2025-09-23 07:39:07 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:07.561784 | orchestrator | 2025-09-23 07:39:07 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:07.563829 | orchestrator | 2025-09-23 07:39:07 | INFO  | Task 
5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:07.567697 | orchestrator | 2025-09-23 07:39:07 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:07.567740 | orchestrator | 2025-09-23 07:39:07 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:10.608386 | orchestrator | 2025-09-23 07:39:10 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:10.610709 | orchestrator | 2025-09-23 07:39:10 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:10.612548 | orchestrator | 2025-09-23 07:39:10 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:10.614219 | orchestrator | 2025-09-23 07:39:10 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:10.614338 | orchestrator | 2025-09-23 07:39:10 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:13.656834 | orchestrator | 2025-09-23 07:39:13 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:13.657000 | orchestrator | 2025-09-23 07:39:13 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:13.657408 | orchestrator | 2025-09-23 07:39:13 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:13.658092 | orchestrator | 2025-09-23 07:39:13 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:13.658295 | orchestrator | 2025-09-23 07:39:13 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:16.694225 | orchestrator | 2025-09-23 07:39:16 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:16.694473 | orchestrator | 2025-09-23 07:39:16 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:16.696063 | orchestrator | 2025-09-23 07:39:16 | INFO  | Task 
5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:16.698237 | orchestrator | 2025-09-23 07:39:16 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:16.698293 | orchestrator | 2025-09-23 07:39:16 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:19.738160 | orchestrator | 2025-09-23 07:39:19 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:19.738423 | orchestrator | 2025-09-23 07:39:19 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:19.740985 | orchestrator | 2025-09-23 07:39:19 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:19.741673 | orchestrator | 2025-09-23 07:39:19 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:19.741831 | orchestrator | 2025-09-23 07:39:19 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:22.774064 | orchestrator | 2025-09-23 07:39:22 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:22.774232 | orchestrator | 2025-09-23 07:39:22 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:22.774958 | orchestrator | 2025-09-23 07:39:22 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:22.775545 | orchestrator | 2025-09-23 07:39:22 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:22.775609 | orchestrator | 2025-09-23 07:39:22 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:25.808972 | orchestrator | 2025-09-23 07:39:25 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:25.809192 | orchestrator | 2025-09-23 07:39:25 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:25.809763 | orchestrator | 2025-09-23 07:39:25 | INFO  | Task 
5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:25.810417 | orchestrator | 2025-09-23 07:39:25 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:25.810457 | orchestrator | 2025-09-23 07:39:25 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:28.830939 | orchestrator | 2025-09-23 07:39:28 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:28.831024 | orchestrator | 2025-09-23 07:39:28 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:28.832653 | orchestrator | 2025-09-23 07:39:28 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:28.835213 | orchestrator | 2025-09-23 07:39:28 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:28.835714 | orchestrator | 2025-09-23 07:39:28 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:31.867393 | orchestrator | 2025-09-23 07:39:31 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:31.867480 | orchestrator | 2025-09-23 07:39:31 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:31.867872 | orchestrator | 2025-09-23 07:39:31 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:31.868771 | orchestrator | 2025-09-23 07:39:31 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:31.868851 | orchestrator | 2025-09-23 07:39:31 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:34.903607 | orchestrator | 2025-09-23 07:39:34 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:34.903793 | orchestrator | 2025-09-23 07:39:34 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:34.905565 | orchestrator | 2025-09-23 07:39:34 | INFO  | Task 
5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:34.906271 | orchestrator | 2025-09-23 07:39:34 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:34.906397 | orchestrator | 2025-09-23 07:39:34 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:37.943506 | orchestrator | 2025-09-23 07:39:37 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:37.943763 | orchestrator | 2025-09-23 07:39:37 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:37.944180 | orchestrator | 2025-09-23 07:39:37 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:37.945027 | orchestrator | 2025-09-23 07:39:37 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state STARTED 2025-09-23 07:39:37.945281 | orchestrator | 2025-09-23 07:39:37 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:40.983945 | orchestrator | 2025-09-23 07:39:40 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:40.985542 | orchestrator | 2025-09-23 07:39:40 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:40.988168 | orchestrator | 2025-09-23 07:39:40 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:40.990368 | orchestrator | 2025-09-23 07:39:40 | INFO  | Task 4f6d196f-84bf-437c-83a6-8a1c96a52794 is in state SUCCESS 2025-09-23 07:39:40.991971 | orchestrator | 2025-09-23 07:39:40.992007 | orchestrator | 2025-09-23 07:39:40.992019 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-23 07:39:40.992032 | orchestrator | 2025-09-23 07:39:40.992043 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-23 07:39:40.992054 | orchestrator | Tuesday 23 September 2025 07:38:01 +0000 (0:00:00.168) 0:00:00.168 
***** 2025-09-23 07:39:40.992079 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-23 07:39:40.992091 | orchestrator | 2025-09-23 07:39:40.992102 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-23 07:39:40.992113 | orchestrator | Tuesday 23 September 2025 07:38:02 +0000 (0:00:00.785) 0:00:00.953 ***** 2025-09-23 07:39:40.992124 | orchestrator | changed: [testbed-manager] 2025-09-23 07:39:40.992135 | orchestrator | 2025-09-23 07:39:40.992146 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-23 07:39:40.992157 | orchestrator | Tuesday 23 September 2025 07:38:03 +0000 (0:00:01.316) 0:00:02.270 ***** 2025-09-23 07:39:40.992167 | orchestrator | changed: [testbed-manager] 2025-09-23 07:39:40.992178 | orchestrator | 2025-09-23 07:39:40.992189 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:39:40.992200 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:39:40.992212 | orchestrator | 2025-09-23 07:39:40.992223 | orchestrator | 2025-09-23 07:39:40.992233 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:39:40.992244 | orchestrator | Tuesday 23 September 2025 07:38:03 +0000 (0:00:00.433) 0:00:02.703 ***** 2025-09-23 07:39:40.992255 | orchestrator | =============================================================================== 2025-09-23 07:39:40.992280 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.32s 2025-09-23 07:39:40.992291 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s 2025-09-23 07:39:40.992302 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.43s 2025-09-23 07:39:40.992313 | orchestrator | 
2025-09-23 07:39:40.992325 | orchestrator |
2025-09-23 07:39:40.992335 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-23 07:39:40.992346 | orchestrator |
2025-09-23 07:39:40.992356 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-23 07:39:40.992367 | orchestrator | Tuesday 23 September 2025 07:38:01 +0000 (0:00:00.245) 0:00:00.245 *****
2025-09-23 07:39:40.992414 | orchestrator | ok: [testbed-manager]
2025-09-23 07:39:40.992427 | orchestrator |
2025-09-23 07:39:40.992437 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-23 07:39:40.992448 | orchestrator | Tuesday 23 September 2025 07:38:02 +0000 (0:00:00.663) 0:00:00.909 *****
2025-09-23 07:39:40.992459 | orchestrator | ok: [testbed-manager]
2025-09-23 07:39:40.992469 | orchestrator |
2025-09-23 07:39:40.992479 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-23 07:39:40.992490 | orchestrator | Tuesday 23 September 2025 07:38:02 +0000 (0:00:00.539) 0:00:01.449 *****
2025-09-23 07:39:40.992501 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-23 07:39:40.992511 | orchestrator |
2025-09-23 07:39:40.992522 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-23 07:39:40.992532 | orchestrator | Tuesday 23 September 2025 07:38:03 +0000 (0:00:00.700) 0:00:02.149 *****
2025-09-23 07:39:40.992543 | orchestrator | changed: [testbed-manager]
2025-09-23 07:39:40.992554 | orchestrator |
2025-09-23 07:39:40.992564 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-23 07:39:40.992575 | orchestrator | Tuesday 23 September 2025 07:38:04 +0000 (0:00:01.122) 0:00:03.272 *****
2025-09-23 07:39:40.992585 | orchestrator | changed: [testbed-manager]
2025-09-23 07:39:40.992596 | orchestrator |
2025-09-23 07:39:40.992607 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-23 07:39:40.992617 | orchestrator | Tuesday 23 September 2025 07:38:05 +0000 (0:00:00.725) 0:00:03.997 *****
2025-09-23 07:39:40.992628 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-23 07:39:40.992638 | orchestrator |
2025-09-23 07:39:40.992649 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-23 07:39:40.992659 | orchestrator | Tuesday 23 September 2025 07:38:06 +0000 (0:00:01.223) 0:00:05.221 *****
2025-09-23 07:39:40.992670 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-23 07:39:40.992680 | orchestrator |
2025-09-23 07:39:40.992692 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-23 07:39:40.992702 | orchestrator | Tuesday 23 September 2025 07:38:07 +0000 (0:00:00.768) 0:00:05.989 *****
2025-09-23 07:39:40.992713 | orchestrator | ok: [testbed-manager]
2025-09-23 07:39:40.992724 | orchestrator |
2025-09-23 07:39:40.992734 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-23 07:39:40.992745 | orchestrator | Tuesday 23 September 2025 07:38:07 +0000 (0:00:00.357) 0:00:06.346 *****
2025-09-23 07:39:40.992755 | orchestrator | ok: [testbed-manager]
2025-09-23 07:39:40.992766 | orchestrator |
2025-09-23 07:39:40.992776 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:39:40.992787 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:39:40.992798 | orchestrator |
2025-09-23 07:39:40.992865 | orchestrator |
2025-09-23 07:39:40.992876 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:39:40.992887 | orchestrator | Tuesday 23 September 2025 07:38:07 +0000 (0:00:00.265) 0:00:06.612 *****
2025-09-23 07:39:40.992898 | orchestrator | ===============================================================================
2025-09-23 07:39:40.992908 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.22s
2025-09-23 07:39:40.992919 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.12s
2025-09-23 07:39:40.992930 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.77s
2025-09-23 07:39:40.992950 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.73s
2025-09-23 07:39:40.992960 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s
2025-09-23 07:39:40.992970 | orchestrator | Get home directory of operator user ------------------------------------- 0.66s
2025-09-23 07:39:40.992988 | orchestrator | Create .kube directory -------------------------------------------------- 0.54s
2025-09-23 07:39:40.992997 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.36s
2025-09-23 07:39:40.993006 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s
2025-09-23 07:39:40.993016 | orchestrator |
2025-09-23 07:39:40.993025 | orchestrator |
2025-09-23 07:39:40.993035 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-09-23 07:39:40.993044 | orchestrator |
2025-09-23 07:39:40.993054 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-23 07:39:40.993063 | orchestrator | Tuesday 23 September 2025 07:37:14 +0000 (0:00:00.076) 0:00:00.076 *****
2025-09-23 07:39:40.993072 | orchestrator | ok: [localhost] => {
2025-09-23 07:39:40.993082 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-23 07:39:40.993092 | orchestrator | }
2025-09-23 07:39:40.993102 | orchestrator |
2025-09-23 07:39:40.993111 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-23 07:39:40.993121 | orchestrator | Tuesday 23 September 2025 07:37:14 +0000 (0:00:00.053) 0:00:00.129 *****
2025-09-23 07:39:40.993131 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-23 07:39:40.993147 | orchestrator | ...ignoring
2025-09-23 07:39:40.993157 | orchestrator |
2025-09-23 07:39:40.993167 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-23 07:39:40.993176 | orchestrator | Tuesday 23 September 2025 07:37:17 +0000 (0:00:03.458) 0:00:03.588 *****
2025-09-23 07:39:40.993186 | orchestrator | skipping: [localhost]
2025-09-23 07:39:40.993195 | orchestrator |
2025-09-23 07:39:40.993205 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-23 07:39:40.993214 | orchestrator | Tuesday 23 September 2025 07:37:18 +0000 (0:00:00.073) 0:00:03.661 *****
2025-09-23 07:39:40.993223 | orchestrator | ok: [localhost]
2025-09-23 07:39:40.993233 | orchestrator |
2025-09-23 07:39:40.993243 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:39:40.993252 | orchestrator |
2025-09-23 07:39:40.993261 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:39:40.993271 | orchestrator | Tuesday 23 September 2025 07:37:18 +0000 (0:00:00.178) 0:00:03.840 *****
2025-09-23 07:39:40.993280 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:39:40.993290 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:39:40.993299 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:39:40.993309 | orchestrator |
2025-09-23 07:39:40.993318 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:39:40.993328 | orchestrator | Tuesday 23 September 2025 07:37:18 +0000 (0:00:00.560) 0:00:04.400 *****
2025-09-23 07:39:40.993337 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-23 07:39:40.993347 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-23 07:39:40.993356 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-23 07:39:40.993365 | orchestrator |
2025-09-23 07:39:40.993375 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-23 07:39:40.993384 | orchestrator |
2025-09-23 07:39:40.993394 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-23 07:39:40.993403 | orchestrator | Tuesday 23 September 2025 07:37:19 +0000 (0:00:00.510) 0:00:04.910 *****
2025-09-23 07:39:40.993413 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:39:40.993422 | orchestrator |
2025-09-23 07:39:40.993431 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-23 07:39:40.993441 | orchestrator | Tuesday 23 September 2025 07:37:19 +0000 (0:00:00.524) 0:00:05.434 *****
2025-09-23 07:39:40.993450 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:39:40.993460 | orchestrator |
2025-09-23 07:39:40.993475 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-23 07:39:40.993484 | orchestrator | Tuesday 23 September 2025 07:37:21 +0000 (0:00:01.181) 0:00:06.616 *****
2025-09-23 07:39:40.993494 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:39:40.993503 | orchestrator |
2025-09-23 07:39:40.993513 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-23 07:39:40.993522 | orchestrator | Tuesday 23 September 2025 07:37:21 +0000 (0:00:00.329) 0:00:06.946 *****
2025-09-23 07:39:40.993532 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:39:40.993541 | orchestrator |
2025-09-23 07:39:40.993550 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-23 07:39:40.993560 | orchestrator | Tuesday 23 September 2025 07:37:21 +0000 (0:00:00.338) 0:00:07.284 *****
2025-09-23 07:39:40.993569 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:39:40.993579 | orchestrator |
2025-09-23 07:39:40.993588 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-23 07:39:40.993598 | orchestrator | Tuesday 23 September 2025 07:37:22 +0000 (0:00:00.343) 0:00:07.628 *****
2025-09-23 07:39:40.993607 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:39:40.993617 | orchestrator |
2025-09-23 07:39:40.993626 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-23 07:39:40.993636 | orchestrator | Tuesday 23 September 2025 07:37:22 +0000 (0:00:00.386) 0:00:08.015 *****
2025-09-23 07:39:40.993645 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:39:40.993655 | orchestrator |
2025-09-23 07:39:40.993664 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-23 07:39:40.993679 | orchestrator | Tuesday 23 September 2025 07:37:23 +0000 (0:00:00.656) 0:00:08.671 *****
2025-09-23 07:39:40.993689 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:39:40.993699 | orchestrator |
2025-09-23 07:39:40.993708 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-09-23 07:39:40.993718 | orchestrator | Tuesday 23 September 2025 07:37:24 +0000 (0:00:00.983) 0:00:09.655 *****
2025-09-23 07:39:40.993727 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:39:40.993737 | orchestrator |
2025-09-23 07:39:40.993746 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-09-23 07:39:40.993756 | orchestrator | Tuesday 23 September 2025 07:37:24 +0000 (0:00:00.418) 0:00:10.073 *****
2025-09-23 07:39:40.993765 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:39:40.993775 | orchestrator |
2025-09-23 07:39:40.993784 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-09-23 07:39:40.993793 | orchestrator | Tuesday 23 September 2025 07:37:24 +0000 (0:00:00.329) 0:00:10.404 *****
2025-09-23 07:39:40.993837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-23 07:39:40.993853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-23 07:39:40.993872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-23 07:39:40.993883 | orchestrator |
2025-09-23 07:39:40.993893 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-09-23 07:39:40.993903 | orchestrator | Tuesday 23 September 2025 07:37:26 +0000 (0:00:01.377) 0:00:11.781 *****
2025-09-23 07:39:40.993921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-23 07:39:40.993936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-23 07:39:40.993954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-23 07:39:40.993965 | orchestrator |
2025-09-23 07:39:40.993974 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-09-23 07:39:40.993984 | orchestrator | Tuesday 23 September 2025 07:37:30 +0000 (0:00:04.396) 0:00:16.178 *****
2025-09-23 07:39:40.993993 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-23 07:39:40.994003 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-23 07:39:40.994013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-23 07:39:40.994066 | orchestrator | 2025-09-23 07:39:40.994076 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-23 07:39:40.994086 | orchestrator | Tuesday 23 September 2025 07:37:32 +0000 (0:00:01.812) 0:00:17.990 ***** 2025-09-23 07:39:40.994095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-23 07:39:40.994105 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-23 07:39:40.994114 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-23 07:39:40.994124 | orchestrator | 2025-09-23 07:39:40.994133 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-23 07:39:40.994148 | orchestrator | Tuesday 23 September 2025 07:37:34 +0000 (0:00:02.606) 0:00:20.596 ***** 2025-09-23 07:39:40.994159 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-23 07:39:40.994168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-23 07:39:40.994177 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-23 07:39:40.994187 | orchestrator | 2025-09-23 07:39:40.994196 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-23 07:39:40.994206 | orchestrator | Tuesday 23 September 2025 07:37:36 +0000 (0:00:01.646) 0:00:22.243 ***** 2025-09-23 07:39:40.994215 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-23 07:39:40.994224 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-23 07:39:40.994234 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-23 07:39:40.994244 | orchestrator | 2025-09-23 07:39:40.994253 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-23 07:39:40.994263 | orchestrator | Tuesday 23 September 2025 07:37:39 +0000 (0:00:02.542) 0:00:24.785 ***** 2025-09-23 07:39:40.994279 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-23 07:39:40.994288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-23 07:39:40.994298 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-23 07:39:40.994307 | orchestrator | 2025-09-23 07:39:40.994321 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-23 07:39:40.994331 | orchestrator | Tuesday 23 September 2025 07:37:41 +0000 (0:00:02.428) 0:00:27.214 ***** 2025-09-23 07:39:40.994341 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-23 07:39:40.994350 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-23 07:39:40.994360 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-23 07:39:40.994369 | orchestrator | 2025-09-23 07:39:40.994379 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-23 07:39:40.994388 | orchestrator | Tuesday 23 September 2025 07:37:44 +0000 
(0:00:02.645) 0:00:29.860 ***** 2025-09-23 07:39:40.994398 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:39:40.994407 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:39:40.994417 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:39:40.994426 | orchestrator | 2025-09-23 07:39:40.994435 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-23 07:39:40.994445 | orchestrator | Tuesday 23 September 2025 07:37:45 +0000 (0:00:00.788) 0:00:30.648 ***** 2025-09-23 07:39:40.994455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-23 07:39:40.994472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-23 07:39:40.994483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-23 07:39:40.994499 | orchestrator | 2025-09-23 07:39:40.994513 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 
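The loop items above echo the full per-service definition that kolla-ansible iterates over. A minimal sketch of that structure, with keys and values copied from the log: the `healthcheck` block carries plain-second strings, and mapping them onto `docker run` health options is an assumption here (kolla-ansible's actual translation layer is not shown in this log).

```python
# Sketch of the rabbitmq service definition echoed in the loop items above.
# Keys and values are copied from the log output; the translation of the
# healthcheck fields into docker-run style flags is an assumption.

service = {
    "container_name": "rabbitmq",
    "image": "registry.osism.tech/kolla/rabbitmq:2024.2",
    "volumes": [
        "/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro",
        "rabbitmq:/var/lib/rabbitmq/",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_rabbitmq"],
        "timeout": "30",
    },
}

def healthcheck_flags(hc):
    """Render a Kolla healthcheck dict as docker-run health options."""
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        f"--health-cmd={hc['test'][1]}",
    ]

print(healthcheck_flags(service["healthcheck"]))
```

With the values from this run, the sketch yields an interval and timeout of 30s, 3 retries, a 5s start period, and `healthcheck_rabbitmq` as the probe command.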
2025-09-23 07:39:40.994523 | orchestrator | Tuesday 23 September 2025 07:37:47 +0000 (0:00:02.544) 0:00:33.193 ***** 2025-09-23 07:39:40.994532 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:39:40.994542 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:39:40.994551 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:39:40.994561 | orchestrator | 2025-09-23 07:39:40.994570 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-23 07:39:40.994580 | orchestrator | Tuesday 23 September 2025 07:37:48 +0000 (0:00:01.056) 0:00:34.250 ***** 2025-09-23 07:39:40.994590 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:39:40.994599 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:39:40.994609 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:39:40.994618 | orchestrator | 2025-09-23 07:39:40.994628 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-23 07:39:40.994637 | orchestrator | Tuesday 23 September 2025 07:37:55 +0000 (0:00:06.611) 0:00:40.861 ***** 2025-09-23 07:39:40.994647 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:39:40.994656 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:39:40.994666 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:39:40.994675 | orchestrator | 2025-09-23 07:39:40.994684 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-23 07:39:40.994694 | orchestrator | 2025-09-23 07:39:40.994703 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-23 07:39:40.994713 | orchestrator | Tuesday 23 September 2025 07:37:55 +0000 (0:00:00.610) 0:00:41.471 ***** 2025-09-23 07:39:40.994723 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:39:40.994732 | orchestrator | 2025-09-23 07:39:40.994741 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] 
********************** 2025-09-23 07:39:40.994751 | orchestrator | Tuesday 23 September 2025 07:37:56 +0000 (0:00:00.617) 0:00:42.089 ***** 2025-09-23 07:39:40.994760 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:39:40.994770 | orchestrator | 2025-09-23 07:39:40.994779 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-23 07:39:40.994789 | orchestrator | Tuesday 23 September 2025 07:37:56 +0000 (0:00:00.240) 0:00:42.329 ***** 2025-09-23 07:39:40.994798 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:39:40.994823 | orchestrator | 2025-09-23 07:39:40.994833 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-23 07:39:40.994843 | orchestrator | Tuesday 23 September 2025 07:37:58 +0000 (0:00:02.254) 0:00:44.584 ***** 2025-09-23 07:39:40.994853 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:39:40.994862 | orchestrator | 2025-09-23 07:39:40.994872 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-23 07:39:40.994881 | orchestrator | 2025-09-23 07:39:40.994891 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-23 07:39:40.994900 | orchestrator | Tuesday 23 September 2025 07:38:56 +0000 (0:00:57.791) 0:01:42.375 ***** 2025-09-23 07:39:40.994915 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:39:40.994925 | orchestrator | 2025-09-23 07:39:40.994935 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-23 07:39:40.994944 | orchestrator | Tuesday 23 September 2025 07:38:57 +0000 (0:00:00.606) 0:01:42.982 ***** 2025-09-23 07:39:40.994954 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:39:40.994964 | orchestrator | 2025-09-23 07:39:40.994973 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-23 07:39:40.994982 | 
orchestrator | Tuesday 23 September 2025 07:38:57 +0000 (0:00:00.198) 0:01:43.180 ***** 2025-09-23 07:39:40.994992 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:39:40.995001 | orchestrator | 2025-09-23 07:39:40.995011 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-23 07:39:40.995020 | orchestrator | Tuesday 23 September 2025 07:38:59 +0000 (0:00:01.686) 0:01:44.867 ***** 2025-09-23 07:39:40.995030 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:39:40.995040 | orchestrator | 2025-09-23 07:39:40.995049 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-23 07:39:40.995059 | orchestrator | 2025-09-23 07:39:40.995068 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-23 07:39:40.995078 | orchestrator | Tuesday 23 September 2025 07:39:17 +0000 (0:00:17.780) 0:02:02.647 ***** 2025-09-23 07:39:40.995088 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:39:40.995097 | orchestrator | 2025-09-23 07:39:40.995112 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-23 07:39:40.995122 | orchestrator | Tuesday 23 September 2025 07:39:17 +0000 (0:00:00.608) 0:02:03.256 ***** 2025-09-23 07:39:40.995132 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:39:40.995141 | orchestrator | 2025-09-23 07:39:40.995151 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-23 07:39:40.995160 | orchestrator | Tuesday 23 September 2025 07:39:17 +0000 (0:00:00.236) 0:02:03.492 ***** 2025-09-23 07:39:40.995170 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:39:40.995179 | orchestrator | 2025-09-23 07:39:40.995189 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-23 07:39:40.995198 | orchestrator | Tuesday 23 September 2025 07:39:19 
+0000 (0:00:01.686) 0:02:05.179 ***** 2025-09-23 07:39:40.995208 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:39:40.995217 | orchestrator | 2025-09-23 07:39:40.995227 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-23 07:39:40.995236 | orchestrator | 2025-09-23 07:39:40.995246 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-23 07:39:40.995255 | orchestrator | Tuesday 23 September 2025 07:39:35 +0000 (0:00:16.103) 0:02:21.282 ***** 2025-09-23 07:39:40.995265 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:39:40.995274 | orchestrator | 2025-09-23 07:39:40.995283 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-23 07:39:40.995293 | orchestrator | Tuesday 23 September 2025 07:39:36 +0000 (0:00:00.511) 0:02:21.794 ***** 2025-09-23 07:39:40.995302 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-23 07:39:40.995312 | orchestrator | enable_outward_rabbitmq_True 2025-09-23 07:39:40.995321 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-23 07:39:40.995340 | orchestrator | outward_rabbitmq_restart 2025-09-23 07:39:40.995350 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:39:40.995359 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:39:40.995369 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:39:40.995378 | orchestrator | 2025-09-23 07:39:40.995388 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-23 07:39:40.995398 | orchestrator | skipping: no hosts matched 2025-09-23 07:39:40.995407 | orchestrator | 2025-09-23 07:39:40.995417 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-23 07:39:40.995426 | orchestrator | skipping: no hosts matched 2025-09-23 
07:39:40.995441 | orchestrator | 2025-09-23 07:39:40.995451 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-23 07:39:40.995460 | orchestrator | skipping: no hosts matched 2025-09-23 07:39:40.995470 | orchestrator | 2025-09-23 07:39:40.995480 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:39:40.995490 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-23 07:39:40.995500 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-23 07:39:40.995509 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-23 07:39:40.995519 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-23 07:39:40.995529 | orchestrator | 2025-09-23 07:39:40.995538 | orchestrator | 2025-09-23 07:39:40.995548 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:39:40.995558 | orchestrator | Tuesday 23 September 2025 07:39:38 +0000 (0:00:02.550) 0:02:24.344 ***** 2025-09-23 07:39:40.995567 | orchestrator | =============================================================================== 2025-09-23 07:39:40.995577 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 91.67s 2025-09-23 07:39:40.995586 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.61s 2025-09-23 07:39:40.995596 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.63s 2025-09-23 07:39:40.995605 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.40s 2025-09-23 07:39:40.995615 | orchestrator | Check RabbitMQ service -------------------------------------------------- 
3.46s 2025-09-23 07:39:40.995624 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.65s 2025-09-23 07:39:40.995633 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.61s 2025-09-23 07:39:40.995643 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.55s 2025-09-23 07:39:40.995652 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.54s 2025-09-23 07:39:40.995662 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.54s 2025-09-23 07:39:40.995671 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.43s 2025-09-23 07:39:40.995681 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.83s 2025-09-23 07:39:40.995690 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.81s 2025-09-23 07:39:40.995700 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.65s 2025-09-23 07:39:40.995710 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.38s 2025-09-23 07:39:40.995719 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.18s 2025-09-23 07:39:40.995729 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.06s 2025-09-23 07:39:40.995743 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.98s 2025-09-23 07:39:40.995753 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.79s 2025-09-23 07:39:40.995762 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.68s 2025-09-23 07:39:40.995772 | orchestrator | 2025-09-23 07:39:40 | INFO  | Wait 1 second(s) until the next check 2025-09-23 
07:39:44.028207 | orchestrator | 2025-09-23 07:39:44 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:44.028299 | orchestrator | 2025-09-23 07:39:44 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:44.028502 | orchestrator | 2025-09-23 07:39:44 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:44.029003 | orchestrator | 2025-09-23 07:39:44 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:47.062603 | orchestrator | 2025-09-23 07:39:47 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:47.065422 | orchestrator | 2025-09-23 07:39:47 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:47.065892 | orchestrator | 2025-09-23 07:39:47 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:47.066173 | orchestrator | 2025-09-23 07:39:47 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:50.093813 | orchestrator | 2025-09-23 07:39:50 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:50.094520 | orchestrator | 2025-09-23 07:39:50 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:50.095599 | orchestrator | 2025-09-23 07:39:50 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:50.095634 | orchestrator | 2025-09-23 07:39:50 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:53.124490 | orchestrator | 2025-09-23 07:39:53 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:53.125465 | orchestrator | 2025-09-23 07:39:53 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:53.127417 | orchestrator | 2025-09-23 07:39:53 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:53.127445 | orchestrator 
| 2025-09-23 07:39:53 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:56.197968 | orchestrator | 2025-09-23 07:39:56 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:56.199167 | orchestrator | 2025-09-23 07:39:56 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:56.201075 | orchestrator | 2025-09-23 07:39:56 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:56.201108 | orchestrator | 2025-09-23 07:39:56 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:39:59.244027 | orchestrator | 2025-09-23 07:39:59 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:39:59.246532 | orchestrator | 2025-09-23 07:39:59 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:39:59.248671 | orchestrator | 2025-09-23 07:39:59 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:39:59.248833 | orchestrator | 2025-09-23 07:39:59 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:02.296151 | orchestrator | 2025-09-23 07:40:02 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:02.297844 | orchestrator | 2025-09-23 07:40:02 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:02.299526 | orchestrator | 2025-09-23 07:40:02 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:40:02.299579 | orchestrator | 2025-09-23 07:40:02 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:05.345317 | orchestrator | 2025-09-23 07:40:05 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:05.346956 | orchestrator | 2025-09-23 07:40:05 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:05.348651 | orchestrator | 2025-09-23 07:40:05 | INFO  | Task 
5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:40:05.348724 | orchestrator | 2025-09-23 07:40:05 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:08.386675 | orchestrator | 2025-09-23 07:40:08 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:08.388697 | orchestrator | 2025-09-23 07:40:08 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:08.389661 | orchestrator | 2025-09-23 07:40:08 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:40:08.390287 | orchestrator | 2025-09-23 07:40:08 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:11.429157 | orchestrator | 2025-09-23 07:40:11 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:11.429238 | orchestrator | 2025-09-23 07:40:11 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:11.429252 | orchestrator | 2025-09-23 07:40:11 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:40:11.429263 | orchestrator | 2025-09-23 07:40:11 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:14.484215 | orchestrator | 2025-09-23 07:40:14 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:14.486633 | orchestrator | 2025-09-23 07:40:14 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:14.489001 | orchestrator | 2025-09-23 07:40:14 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:40:14.490104 | orchestrator | 2025-09-23 07:40:14 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:17.545643 | orchestrator | 2025-09-23 07:40:17 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:17.546659 | orchestrator | 2025-09-23 07:40:17 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state 
STARTED 2025-09-23 07:40:17.548420 | orchestrator | 2025-09-23 07:40:17 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:40:17.548553 | orchestrator | 2025-09-23 07:40:17 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:20.599683 | orchestrator | 2025-09-23 07:40:20 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:20.599882 | orchestrator | 2025-09-23 07:40:20 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:20.601175 | orchestrator | 2025-09-23 07:40:20 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state STARTED 2025-09-23 07:40:20.601234 | orchestrator | 2025-09-23 07:40:20 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:23.638705 | orchestrator | 2025-09-23 07:40:23 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:23.639714 | orchestrator | 2025-09-23 07:40:23 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:23.642175 | orchestrator | 2025-09-23 07:40:23 | INFO  | Task 5f4fa2e7-43a0-482b-aa2b-e5d3080c0dc3 is in state SUCCESS 2025-09-23 07:40:23.645657 | orchestrator | 2025-09-23 07:40:23.645733 | orchestrator | 2025-09-23 07:40:23.645781 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:40:23.645794 | orchestrator | 2025-09-23 07:40:23.645805 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:40:23.645817 | orchestrator | Tuesday 23 September 2025 07:38:03 +0000 (0:00:00.208) 0:00:00.208 ***** 2025-09-23 07:40:23.645828 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:40:23.645840 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:40:23.645924 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:40:23.645964 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.645976 | orchestrator | ok: 
[testbed-node-1] 2025-09-23 07:40:23.645986 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.645997 | orchestrator | 2025-09-23 07:40:23.646008 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:40:23.646073 | orchestrator | Tuesday 23 September 2025 07:38:04 +0000 (0:00:01.076) 0:00:01.285 ***** 2025-09-23 07:40:23.646088 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-23 07:40:23.646099 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-23 07:40:23.646110 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-23 07:40:23.646121 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-23 07:40:23.646132 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-23 07:40:23.646143 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-23 07:40:23.646154 | orchestrator | 2025-09-23 07:40:23.646165 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-23 07:40:23.646175 | orchestrator | 2025-09-23 07:40:23.646186 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-23 07:40:23.646197 | orchestrator | Tuesday 23 September 2025 07:38:05 +0000 (0:00:01.257) 0:00:02.542 ***** 2025-09-23 07:40:23.646210 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:40:23.646222 | orchestrator | 2025-09-23 07:40:23.646233 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-23 07:40:23.646244 | orchestrator | Tuesday 23 September 2025 07:38:07 +0000 (0:00:01.146) 0:00:03.689 ***** 2025-09-23 07:40:23.646258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646275 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646289 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646378 | orchestrator | 2025-09-23 07:40:23.646406 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-23 07:40:23.646419 | orchestrator | Tuesday 23 September 2025 07:38:08 +0000 (0:00:01.152) 0:00:04.841 ***** 2025-09-23 07:40:23.646432 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646509 | orchestrator | 2025-09-23 07:40:23.646521 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-23 07:40:23.646534 | orchestrator | Tuesday 23 September 2025 07:38:10 +0000 (0:00:01.849) 0:00:06.691 ***** 
2025-09-23 07:40:23.646546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646566 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646720 | orchestrator | 2025-09-23 07:40:23.646731 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-23 07:40:23.646765 | orchestrator | Tuesday 23 September 2025 07:38:11 +0000 (0:00:01.852) 0:00:08.544 ***** 2025-09-23 07:40:23.646777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646799 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646856 | orchestrator | 2025-09-23 07:40:23.646875 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-23 
07:40:23.646886 | orchestrator | Tuesday 23 September 2025 07:38:13 +0000 (0:00:01.697) 0:00:10.242 ***** 2025-09-23 07:40:23.646897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646942 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.646970 | orchestrator | 2025-09-23 07:40:23.646981 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-23 07:40:23.646997 | orchestrator | Tuesday 23 September 2025 07:38:15 +0000 (0:00:01.594) 0:00:11.836 ***** 2025-09-23 07:40:23.647009 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:40:23.647020 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:40:23.647031 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:40:23.647041 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:40:23.647052 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:40:23.647063 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:40:23.647074 | orchestrator | 2025-09-23 07:40:23.647084 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-23 07:40:23.647095 | orchestrator | Tuesday 23 September 2025 07:38:17 +0000 (0:00:02.552) 0:00:14.389 ***** 2025-09-23 07:40:23.647106 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
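The "Configure OVN in OVSDB" task above writes per-chassis settings (ovn-encap-ip, ovn-encap-type, ovn-remote, probe intervals) into the local Open vSwitch database. Outside of kolla-ansible, roughly equivalent settings are applied with `ovs-vsctl`; the sketch below only builds the command strings, using the values logged for testbed-node-0. The helper name is illustrative, not part of the playbook, and the real role uses Ansible's OVSDB handling rather than shelling out:

```python
# Sketch: build ovs-vsctl commands corresponding to the external_ids
# applied by the "Configure OVN in OVSDB" task. build_ovs_vsctl_cmds is
# an illustrative helper, not OSISM/kolla-ansible code; the settings are
# the values shown in the log for testbed-node-0.
def build_ovs_vsctl_cmds(settings):
    cmds = []
    for name, value in settings.items():
        # ovn-controller reads its chassis configuration from the
        # external_ids column of the Open_vSwitch table.
        cmds.append(f'ovs-vsctl set open_vswitch . external_ids:{name}="{value}"')
    return cmds

settings = {
    "ovn-encap-ip": "192.168.16.10",
    "ovn-encap-type": "geneve",
    "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
}

for cmd in build_ovs_vsctl_cmds(settings):
    print(cmd)
```

The quoting matters for values such as ovn-remote, whose comma-separated endpoint list would otherwise be misparsed by the shell.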
2025-09-23 07:40:23.647117 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-23 07:40:23.647128 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-23 07:40:23.647138 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-23 07:40:23.647149 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-23 07:40:23.647159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-23 07:40:23.647170 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-23 07:40:23.647181 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-23 07:40:23.647197 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-23 07:40:23.647208 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-23 07:40:23.647219 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-23 07:40:23.647230 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-23 07:40:23.647241 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-23 07:40:23.647253 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-23 07:40:23.647263 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-23 07:40:23.647275 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-23 07:40:23.647286 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-23 07:40:23.647296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-23 07:40:23.647307 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-23 07:40:23.647319 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-23 07:40:23.647330 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-23 07:40:23.647340 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-23 07:40:23.647351 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-23 07:40:23.647368 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-23 07:40:23.647379 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-23 07:40:23.647390 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-23 07:40:23.647401 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-23 07:40:23.647411 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-23 07:40:23.647422 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2025-09-23 07:40:23.647433 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-23 07:40:23.647444 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-23 07:40:23.647454 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-23 07:40:23.647465 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-23 07:40:23.647476 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-23 07:40:23.647492 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-23 07:40:23.647503 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-23 07:40:23.647513 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-23 07:40:23.647524 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-23 07:40:23.647535 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-23 07:40:23.647546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-23 07:40:23.647557 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-23 07:40:23.647568 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-23 07:40:23.647578 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2025-09-23 07:40:23.647590 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-23 07:40:23.647606 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-23 07:40:23.647617 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-23 07:40:23.647628 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-23 07:40:23.647638 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-23 07:40:23.647649 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-23 07:40:23.647660 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-23 07:40:23.647670 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-23 07:40:23.647681 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-23 07:40:23.647698 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-23 07:40:23.647709 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-23 07:40:23.647720 | orchestrator | 2025-09-23 07:40:23.647730 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2025-09-23 07:40:23.647761 | orchestrator | Tuesday 23 September 2025 07:38:37 +0000 (0:00:19.323) 0:00:33.712 ***** 2025-09-23 07:40:23.647772 | orchestrator | 2025-09-23 07:40:23.647783 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-23 07:40:23.647794 | orchestrator | Tuesday 23 September 2025 07:38:37 +0000 (0:00:00.184) 0:00:33.897 ***** 2025-09-23 07:40:23.647805 | orchestrator | 2025-09-23 07:40:23.647815 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-23 07:40:23.647826 | orchestrator | Tuesday 23 September 2025 07:38:37 +0000 (0:00:00.062) 0:00:33.960 ***** 2025-09-23 07:40:23.647837 | orchestrator | 2025-09-23 07:40:23.647848 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-23 07:40:23.647858 | orchestrator | Tuesday 23 September 2025 07:38:37 +0000 (0:00:00.061) 0:00:34.022 ***** 2025-09-23 07:40:23.647869 | orchestrator | 2025-09-23 07:40:23.647880 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-23 07:40:23.647891 | orchestrator | Tuesday 23 September 2025 07:38:37 +0000 (0:00:00.061) 0:00:34.083 ***** 2025-09-23 07:40:23.647902 | orchestrator | 2025-09-23 07:40:23.647912 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-23 07:40:23.647923 | orchestrator | Tuesday 23 September 2025 07:38:37 +0000 (0:00:00.060) 0:00:34.144 ***** 2025-09-23 07:40:23.647934 | orchestrator | 2025-09-23 07:40:23.647945 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-23 07:40:23.647955 | orchestrator | Tuesday 23 September 2025 07:38:37 +0000 (0:00:00.061) 0:00:34.205 ***** 2025-09-23 07:40:23.647966 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:40:23.647977 | orchestrator | ok: 
[testbed-node-3] 2025-09-23 07:40:23.647988 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:40:23.647999 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.648009 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.648020 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.648030 | orchestrator | 2025-09-23 07:40:23.648041 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-23 07:40:23.648052 | orchestrator | Tuesday 23 September 2025 07:38:39 +0000 (0:00:01.775) 0:00:35.981 ***** 2025-09-23 07:40:23.648063 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:40:23.648074 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:40:23.648084 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:40:23.648100 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:40:23.648111 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:40:23.648121 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:40:23.648132 | orchestrator | 2025-09-23 07:40:23.648143 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-23 07:40:23.648154 | orchestrator | 2025-09-23 07:40:23.648165 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-23 07:40:23.648175 | orchestrator | Tuesday 23 September 2025 07:39:11 +0000 (0:00:32.022) 0:01:08.003 ***** 2025-09-23 07:40:23.648186 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:40:23.648197 | orchestrator | 2025-09-23 07:40:23.648208 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-23 07:40:23.648218 | orchestrator | Tuesday 23 September 2025 07:39:12 +0000 (0:00:00.655) 0:01:08.658 ***** 2025-09-23 07:40:23.648229 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-23 07:40:23.648250 | orchestrator | 2025-09-23 07:40:23.648261 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-23 07:40:23.648271 | orchestrator | Tuesday 23 September 2025 07:39:12 +0000 (0:00:00.491) 0:01:09.150 ***** 2025-09-23 07:40:23.648282 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.648293 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.648303 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.648314 | orchestrator | 2025-09-23 07:40:23.648325 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-23 07:40:23.648336 | orchestrator | Tuesday 23 September 2025 07:39:13 +0000 (0:00:00.941) 0:01:10.092 ***** 2025-09-23 07:40:23.648347 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.648357 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.648368 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.648384 | orchestrator | 2025-09-23 07:40:23.648395 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-23 07:40:23.648406 | orchestrator | Tuesday 23 September 2025 07:39:13 +0000 (0:00:00.366) 0:01:10.459 ***** 2025-09-23 07:40:23.648417 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.648427 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.648438 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.648448 | orchestrator | 2025-09-23 07:40:23.648459 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-23 07:40:23.648470 | orchestrator | Tuesday 23 September 2025 07:39:14 +0000 (0:00:00.440) 0:01:10.899 ***** 2025-09-23 07:40:23.648481 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.648491 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.648502 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.648512 | orchestrator | 
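The STARTED/SUCCESS polling visible at the top of this log (each task is checked, then the runner waits one second until the next check, until every task reports SUCCESS) can be sketched as a generic wait loop. `get_state` here is a stand-in for whatever backend the real osism client queries; this is an illustrative loop, not the actual implementation:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll task states until all report SUCCESS, mimicking the
    'Task ... is in state STARTED' / 'Wait 1 second(s) until the next
    check' messages in the log. get_state(task_id) -> str is a stand-in
    for the real task backend (assumption, not the osism API)."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

A deadline guards against a task that never leaves STARTED, which a bare `while` loop would spin on forever.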
2025-09-23 07:40:23.648523 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-23 07:40:23.648534 | orchestrator | Tuesday 23 September 2025 07:39:14 +0000 (0:00:00.396) 0:01:11.296 ***** 2025-09-23 07:40:23.648544 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.648555 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.648565 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.648576 | orchestrator | 2025-09-23 07:40:23.648587 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-23 07:40:23.648597 | orchestrator | Tuesday 23 September 2025 07:39:15 +0000 (0:00:00.543) 0:01:11.840 ***** 2025-09-23 07:40:23.648608 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.648619 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.648630 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.648640 | orchestrator | 2025-09-23 07:40:23.648651 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-23 07:40:23.648662 | orchestrator | Tuesday 23 September 2025 07:39:15 +0000 (0:00:00.302) 0:01:12.143 ***** 2025-09-23 07:40:23.648672 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.648683 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.648693 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.648704 | orchestrator | 2025-09-23 07:40:23.648715 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-23 07:40:23.648726 | orchestrator | Tuesday 23 September 2025 07:39:15 +0000 (0:00:00.302) 0:01:12.445 ***** 2025-09-23 07:40:23.648794 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.648807 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.648818 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.648829 | orchestrator | 2025-09-23 
07:40:23.648840 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-23 07:40:23.648850 | orchestrator | Tuesday 23 September 2025 07:39:16 +0000 (0:00:00.308) 0:01:12.754 ***** 2025-09-23 07:40:23.648861 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.648871 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.648882 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.648893 | orchestrator | 2025-09-23 07:40:23.648909 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-23 07:40:23.648920 | orchestrator | Tuesday 23 September 2025 07:39:16 +0000 (0:00:00.518) 0:01:13.273 ***** 2025-09-23 07:40:23.648930 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.648941 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.648952 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.648962 | orchestrator | 2025-09-23 07:40:23.648973 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-23 07:40:23.648984 | orchestrator | Tuesday 23 September 2025 07:39:17 +0000 (0:00:00.355) 0:01:13.628 ***** 2025-09-23 07:40:23.648995 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649005 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649016 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649026 | orchestrator | 2025-09-23 07:40:23.649037 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-23 07:40:23.649047 | orchestrator | Tuesday 23 September 2025 07:39:17 +0000 (0:00:00.299) 0:01:13.928 ***** 2025-09-23 07:40:23.649058 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649067 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649076 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649086 | orchestrator | 2025-09-23 
07:40:23.649095 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-23 07:40:23.649110 | orchestrator | Tuesday 23 September 2025 07:39:17 +0000 (0:00:00.286) 0:01:14.215 ***** 2025-09-23 07:40:23.649119 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649129 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649138 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649148 | orchestrator | 2025-09-23 07:40:23.649157 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-23 07:40:23.649167 | orchestrator | Tuesday 23 September 2025 07:39:17 +0000 (0:00:00.308) 0:01:14.523 ***** 2025-09-23 07:40:23.649176 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649185 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649195 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649204 | orchestrator | 2025-09-23 07:40:23.649213 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-23 07:40:23.649223 | orchestrator | Tuesday 23 September 2025 07:39:18 +0000 (0:00:00.528) 0:01:15.052 ***** 2025-09-23 07:40:23.649232 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649242 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649251 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649261 | orchestrator | 2025-09-23 07:40:23.649270 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-23 07:40:23.649286 | orchestrator | Tuesday 23 September 2025 07:39:18 +0000 (0:00:00.315) 0:01:15.368 ***** 2025-09-23 07:40:23.649303 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649321 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649337 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649354 | orchestrator | 2025-09-23 
07:40:23.649374 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-23 07:40:23.649397 | orchestrator | Tuesday 23 September 2025 07:39:19 +0000 (0:00:00.302) 0:01:15.670 ***** 2025-09-23 07:40:23.649415 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649433 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649461 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649479 | orchestrator | 2025-09-23 07:40:23.649493 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-23 07:40:23.649503 | orchestrator | Tuesday 23 September 2025 07:39:19 +0000 (0:00:00.280) 0:01:15.951 ***** 2025-09-23 07:40:23.649512 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:40:23.649522 | orchestrator | 2025-09-23 07:40:23.649531 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-23 07:40:23.649551 | orchestrator | Tuesday 23 September 2025 07:39:20 +0000 (0:00:00.770) 0:01:16.722 ***** 2025-09-23 07:40:23.649560 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.649570 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.649579 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.649589 | orchestrator | 2025-09-23 07:40:23.649599 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-23 07:40:23.649616 | orchestrator | Tuesday 23 September 2025 07:39:20 +0000 (0:00:00.406) 0:01:17.128 ***** 2025-09-23 07:40:23.649632 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.649647 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.649667 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.649687 | orchestrator | 2025-09-23 07:40:23.649702 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2025-09-23 07:40:23.649718 | orchestrator | Tuesday 23 September 2025 07:39:20 +0000 (0:00:00.423) 0:01:17.552 ***** 2025-09-23 07:40:23.649734 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649775 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649793 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649804 | orchestrator | 2025-09-23 07:40:23.649813 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-23 07:40:23.649823 | orchestrator | Tuesday 23 September 2025 07:39:21 +0000 (0:00:00.435) 0:01:17.987 ***** 2025-09-23 07:40:23.649832 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649842 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649851 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649861 | orchestrator | 2025-09-23 07:40:23.649870 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-23 07:40:23.649880 | orchestrator | Tuesday 23 September 2025 07:39:21 +0000 (0:00:00.294) 0:01:18.282 ***** 2025-09-23 07:40:23.649889 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649899 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649908 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.649921 | orchestrator | 2025-09-23 07:40:23.649937 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-23 07:40:23.649953 | orchestrator | Tuesday 23 September 2025 07:39:22 +0000 (0:00:00.314) 0:01:18.596 ***** 2025-09-23 07:40:23.649968 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.649983 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.649999 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.650077 | orchestrator | 2025-09-23 07:40:23.650092 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2025-09-23 07:40:23.650102 | orchestrator | Tuesday 23 September 2025 07:39:22 +0000 (0:00:00.454) 0:01:19.051 ***** 2025-09-23 07:40:23.650111 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.650121 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.650130 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.650139 | orchestrator | 2025-09-23 07:40:23.650149 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-23 07:40:23.650159 | orchestrator | Tuesday 23 September 2025 07:39:23 +0000 (0:00:00.677) 0:01:19.728 ***** 2025-09-23 07:40:23.650168 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.650178 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.650187 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.650196 | orchestrator | 2025-09-23 07:40:23.650206 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-23 07:40:23.650216 | orchestrator | Tuesday 23 September 2025 07:39:23 +0000 (0:00:00.301) 0:01:20.030 ***** 2025-09-23 07:40:23.650233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650268 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:23.650822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650889 | orchestrator | 2025-09-23 07:40:23.650901 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-23 07:40:23.650913 | orchestrator | Tuesday 23 September 2025 07:39:24 +0000 (0:00:01.500) 0:01:21.530 ***** 2025-09-23 07:40:23.650924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.650989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651052 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651085 | orchestrator | 2025-09-23 07:40:23.651096 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-23 07:40:23.651108 | orchestrator | Tuesday 23 September 2025 07:39:28 +0000 (0:00:03.874) 0:01:25.404 ***** 2025-09-23 07:40:23.651119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651138 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.651247 | orchestrator | 2025-09-23 07:40:23.651260 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-23 07:40:23.651273 | orchestrator | Tuesday 23 September 2025 07:39:30 +0000 (0:00:02.048) 0:01:27.452 ***** 2025-09-23 07:40:23.651285 | orchestrator | 2025-09-23 07:40:23.651298 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-23 07:40:23.651310 | orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.217) 0:01:27.670 ***** 2025-09-23 
07:40:23.651322 | orchestrator | 2025-09-23 07:40:23.651333 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-23 07:40:23.651350 | orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.069) 0:01:27.739 ***** 2025-09-23 07:40:23.651361 | orchestrator | 2025-09-23 07:40:23.651371 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-23 07:40:23.651382 | orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.062) 0:01:27.802 ***** 2025-09-23 07:40:23.651393 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:40:23.651404 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:40:23.651414 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:40:23.651425 | orchestrator | 2025-09-23 07:40:23.651436 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-23 07:40:23.651447 | orchestrator | Tuesday 23 September 2025 07:39:33 +0000 (0:00:02.644) 0:01:30.446 ***** 2025-09-23 07:40:23.651458 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:40:23.651469 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:40:23.651479 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:40:23.651490 | orchestrator | 2025-09-23 07:40:23.651501 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-23 07:40:23.651511 | orchestrator | Tuesday 23 September 2025 07:39:41 +0000 (0:00:07.649) 0:01:38.095 ***** 2025-09-23 07:40:23.651522 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:40:23.651533 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:40:23.651544 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:40:23.651554 | orchestrator | 2025-09-23 07:40:23.651565 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-23 07:40:23.651580 | orchestrator | Tuesday 23 September 
2025 07:39:43 +0000 (0:00:02.458) 0:01:40.554 ***** 2025-09-23 07:40:23.651591 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:40:23.651602 | orchestrator | 2025-09-23 07:40:23.651613 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-23 07:40:23.651623 | orchestrator | Tuesday 23 September 2025 07:39:44 +0000 (0:00:00.106) 0:01:40.661 ***** 2025-09-23 07:40:23.651634 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.651645 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.651656 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.651666 | orchestrator | 2025-09-23 07:40:23.651677 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-23 07:40:23.651688 | orchestrator | Tuesday 23 September 2025 07:39:45 +0000 (0:00:01.067) 0:01:41.729 ***** 2025-09-23 07:40:23.651699 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.651710 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:40:23.651721 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:40:23.651731 | orchestrator | 2025-09-23 07:40:23.651761 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-23 07:40:23.651772 | orchestrator | Tuesday 23 September 2025 07:39:45 +0000 (0:00:00.635) 0:01:42.365 ***** 2025-09-23 07:40:23.651783 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.651793 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.651804 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.651815 | orchestrator | 2025-09-23 07:40:23.651826 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-23 07:40:23.651836 | orchestrator | Tuesday 23 September 2025 07:39:46 +0000 (0:00:00.720) 0:01:43.085 ***** 2025-09-23 07:40:23.651847 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:40:23.651858 | orchestrator | skipping: 
[testbed-node-2] 2025-09-23 07:40:23.651868 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:40:23.651880 | orchestrator | 2025-09-23 07:40:23.651891 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-23 07:40:23.651902 | orchestrator | Tuesday 23 September 2025 07:39:47 +0000 (0:00:00.630) 0:01:43.716 ***** 2025-09-23 07:40:23.651913 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.651923 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.651941 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.651952 | orchestrator | 2025-09-23 07:40:23.651963 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-23 07:40:23.651982 | orchestrator | Tuesday 23 September 2025 07:39:48 +0000 (0:00:01.036) 0:01:44.752 ***** 2025-09-23 07:40:23.651993 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.652004 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.652015 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.652025 | orchestrator | 2025-09-23 07:40:23.652037 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-23 07:40:23.652047 | orchestrator | Tuesday 23 September 2025 07:39:48 +0000 (0:00:00.772) 0:01:45.525 ***** 2025-09-23 07:40:23.652058 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:40:23.652068 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:40:23.652079 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:40:23.652090 | orchestrator | 2025-09-23 07:40:23.652101 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-23 07:40:23.652112 | orchestrator | Tuesday 23 September 2025 07:39:49 +0000 (0:00:00.268) 0:01:45.794 ***** 2025-09-23 07:40:23.652123 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652135 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652147 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652158 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652171 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652187 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652211 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652237 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:40:23.652249 | orchestrator | 2025-09-23 07:40:23.652260 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-23 07:40:23.652271 | orchestrator | Tuesday 23 September 2025 07:39:50 +0000 (0:00:01.394) 0:01:47.188 ***** 2025-09-23 07:40:23.652282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652294 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652305 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652356 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652396 | orchestrator |
2025-09-23 07:40:23.652407 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-23 07:40:23.652418 | orchestrator | Tuesday 23 September 2025 07:39:54 +0000 (0:00:03.947) 0:01:51.136 *****
2025-09-23 07:40:23.652435 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652447 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652459 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652504 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:40:23.652549 | orchestrator |
2025-09-23 07:40:23.652561 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-23 07:40:23.652571 | orchestrator | Tuesday 23 September 2025 07:39:57 +0000 (0:00:03.303) 0:01:54.440 *****
2025-09-23 07:40:23.652583 | orchestrator |
2025-09-23 07:40:23.652594 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-23 07:40:23.652605 | orchestrator | Tuesday 23 September 2025 07:39:57 +0000 (0:00:00.095) 0:01:54.536 *****
2025-09-23 07:40:23.652615 | orchestrator |
2025-09-23 07:40:23.652626 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-23 07:40:23.652637 | orchestrator | Tuesday 23 September 2025 07:39:58 +0000 (0:00:00.077) 0:01:54.613 *****
2025-09-23 07:40:23.652647 | orchestrator |
2025-09-23 07:40:23.652658 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-23 07:40:23.652669 | orchestrator | Tuesday 23 September 2025 07:39:58 +0000 (0:00:00.080) 0:01:54.694 *****
2025-09-23 07:40:23.652680 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:40:23.652690 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:40:23.652702 | orchestrator |
2025-09-23 07:40:23.652718 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-23 07:40:23.652729 | orchestrator | Tuesday 23 September 2025 07:40:04 +0000 (0:00:06.270) 0:02:00.964 *****
2025-09-23 07:40:23.652781 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:40:23.652793 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:40:23.652804 | orchestrator |
2025-09-23 07:40:23.652815 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-23 07:40:23.652826 | orchestrator | Tuesday 23 September 2025 07:40:10 +0000 (0:00:06.218) 0:02:07.183 *****
2025-09-23 07:40:23.652837 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:40:23.652848 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:40:23.652859 | orchestrator |
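[editor's note] The `(item={'key': ..., 'value': ...})` pairs echoed by the tasks above come from looping over a service map (service name → container spec with `image`, `volumes`, `dimensions`). A minimal sketch of that shape, reconstructed from the log output only (the variable name `ovn_db_services` and the `dict2items`-style expansion are assumptions, not the actual kolla-ansible internals):

```python
# Service map as visible in the log items above (shape reconstructed from the
# log; the name "ovn_db_services" is a hypothetical stand-in).
ovn_db_services = {
    "ovn-northd": {
        "container_name": "ovn_northd",
        "group": "ovn-northd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/ovn-northd:2024.2",
        "volumes": [
            "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

# An Ansible loop of the form `loop: "{{ services | dict2items }}"` yields
# {'key': ..., 'value': ...} pairs like the ones printed in the log.
items = [{"key": k, "value": v} for k, v in ovn_db_services.items()]
for item in items:
    print(item["key"], "->", item["value"]["image"])
```

Each pair is then rendered into that container's `config.json` and compared against the running container, which is why the same item appears once per node with an `ok`/`changed` status.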
2025-09-23 07:40:23.652870 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-23 07:40:23.652881 | orchestrator | Tuesday 23 September 2025 07:40:17 +0000 (0:00:06.644) 0:02:13.827 *****
2025-09-23 07:40:23.652892 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:40:23.652903 | orchestrator |
2025-09-23 07:40:23.652913 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-23 07:40:23.652924 | orchestrator | Tuesday 23 September 2025 07:40:17 +0000 (0:00:00.127) 0:02:13.955 *****
2025-09-23 07:40:23.652935 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:40:23.652946 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:40:23.652957 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:40:23.652968 | orchestrator |
2025-09-23 07:40:23.652978 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-23 07:40:23.652989 | orchestrator | Tuesday 23 September 2025 07:40:18 +0000 (0:00:00.873) 0:02:14.828 *****
2025-09-23 07:40:23.653000 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:40:23.653010 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:40:23.653021 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:40:23.653032 | orchestrator |
2025-09-23 07:40:23.653042 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-23 07:40:23.653054 | orchestrator | Tuesday 23 September 2025 07:40:18 +0000 (0:00:00.593) 0:02:15.422 *****
2025-09-23 07:40:23.653065 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:40:23.653076 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:40:23.653086 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:40:23.653097 | orchestrator |
2025-09-23 07:40:23.653108 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-23 07:40:23.653126 | orchestrator | Tuesday 23 September 2025 07:40:19 +0000 (0:00:00.795) 0:02:16.218 *****
2025-09-23 07:40:23.653137 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:40:23.653147 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:40:23.653158 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:40:23.653169 | orchestrator |
2025-09-23 07:40:23.653180 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-23 07:40:23.653191 | orchestrator | Tuesday 23 September 2025 07:40:20 +0000 (0:00:00.913) 0:02:17.131 *****
2025-09-23 07:40:23.653202 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:40:23.653213 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:40:23.653224 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:40:23.653235 | orchestrator |
2025-09-23 07:40:23.653245 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-23 07:40:23.653257 | orchestrator | Tuesday 23 September 2025 07:40:21 +0000 (0:00:00.726) 0:02:17.858 *****
2025-09-23 07:40:23.653268 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:40:23.653279 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:40:23.653290 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:40:23.653301 | orchestrator |
2025-09-23 07:40:23.653312 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:40:23.653323 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-23 07:40:23.653335 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-23 07:40:23.653359 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-23 07:40:23.653371 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:40:23.653382 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:40:23.653393 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:40:23.653404 | orchestrator |
2025-09-23 07:40:23.653415 | orchestrator |
2025-09-23 07:40:23.653426 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:40:23.653437 | orchestrator | Tuesday 23 September 2025 07:40:22 +0000 (0:00:00.829) 0:02:18.688 *****
2025-09-23 07:40:23.653447 | orchestrator | ===============================================================================
2025-09-23 07:40:23.653458 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.02s
2025-09-23 07:40:23.653469 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.32s
2025-09-23 07:40:23.653479 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.87s
2025-09-23 07:40:23.653491 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.10s
2025-09-23 07:40:23.653501 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.91s
2025-09-23 07:40:23.653513 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.95s
2025-09-23 07:40:23.653523 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.87s
2025-09-23 07:40:23.653541 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.30s
2025-09-23 07:40:23.653553 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.55s
2025-09-23 07:40:23.653564 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.05s
2025-09-23 07:40:23.653575 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.85s
2025-09-23 07:40:23.653586 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.85s
2025-09-23 07:40:23.653606 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.78s
2025-09-23 07:40:23.653617 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.70s
2025-09-23 07:40:23.653628 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.59s
2025-09-23 07:40:23.653639 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.50s
2025-09-23 07:40:23.653650 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s
2025-09-23 07:40:23.653661 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.26s
2025-09-23 07:40:23.653672 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.15s
2025-09-23 07:40:23.653683 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.15s
2025-09-23 07:40:26.699247 | orchestrator | 2025-09-23 07:40:26 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:40:26.700878 | orchestrator | 2025-09-23 07:40:26 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:40:26.700911 | orchestrator | 2025-09-23 07:40:26 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:40:29.761682 | orchestrator | 2025-09-23 07:40:29 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:40:29.764176 | orchestrator | 2025-09-23 07:40:29 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:40:29.764226 | orchestrator | 2025-09-23 07:40:29 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:40:32.808309 | orchestrator | 2025-09-23 07:40:32 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:40:32.809024 | orchestrator | 2025-09-23 07:40:32 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:40:32.809446 | orchestrator | 2025-09-23 07:40:32 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:40:35.859199 | orchestrator | 2025-09-23 07:40:35 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:40:35.860364 | orchestrator | 2025-09-23 07:40:35 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:40:35.860977 | orchestrator | 2025-09-23 07:40:35 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:40:38.913996 | orchestrator | 2025-09-23 07:40:38 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:40:38.915362 | orchestrator | 2025-09-23 07:40:38 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:40:38.915682 | orchestrator | 2025-09-23 07:40:38 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:40:41.988768 | orchestrator | 2025-09-23 07:40:41 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:40:41.991130 | orchestrator | 2025-09-23 07:40:41 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:40:41.991325 | orchestrator | 2025-09-23 07:40:41 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:40:45.033224 | orchestrator | 2025-09-23 07:40:45 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:40:45.035628 | orchestrator | 2025-09-23 07:40:45 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:40:45.036023 | orchestrator | 2025-09-23 07:40:45 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:40:48.088475 | orchestrator | 2025-09-23 07:40:48 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
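[editor's note] The repeating `is in state STARTED` / `Wait 1 second(s) until the next check` messages are a client-side polling loop: the deploy wrapper checks two task IDs until each reaches `SUCCESS`. A minimal sketch of that pattern (the `get_task_state` callback is a hypothetical stand-in for however the real OSISM client queries task state, not its actual API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=600.0):
    """Poll each task until every one reports SUCCESS, or raise on timeout."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical lookup helper
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

With two tasks in flight, each cycle prints one state line per task plus the wait notice, which matches the three-line cadence seen in the log.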
2025-09-23 07:40:48.089158 | orchestrator | 2025-09-23 07:40:48 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:48.089221 | orchestrator | 2025-09-23 07:40:48 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:51.133811 | orchestrator | 2025-09-23 07:40:51 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:51.134506 | orchestrator | 2025-09-23 07:40:51 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:51.135484 | orchestrator | 2025-09-23 07:40:51 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:54.165603 | orchestrator | 2025-09-23 07:40:54 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:54.168499 | orchestrator | 2025-09-23 07:40:54 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:54.168538 | orchestrator | 2025-09-23 07:40:54 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:40:57.216243 | orchestrator | 2025-09-23 07:40:57 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:40:57.217691 | orchestrator | 2025-09-23 07:40:57 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:40:57.217970 | orchestrator | 2025-09-23 07:40:57 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:00.258574 | orchestrator | 2025-09-23 07:41:00 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:00.261072 | orchestrator | 2025-09-23 07:41:00 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:00.261900 | orchestrator | 2025-09-23 07:41:00 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:03.300362 | orchestrator | 2025-09-23 07:41:03 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:03.302127 | orchestrator | 2025-09-23 07:41:03 | INFO  | Task 
e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:03.302205 | orchestrator | 2025-09-23 07:41:03 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:06.343908 | orchestrator | 2025-09-23 07:41:06 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:06.346218 | orchestrator | 2025-09-23 07:41:06 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:06.346337 | orchestrator | 2025-09-23 07:41:06 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:09.394187 | orchestrator | 2025-09-23 07:41:09 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:09.395843 | orchestrator | 2025-09-23 07:41:09 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:09.395940 | orchestrator | 2025-09-23 07:41:09 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:12.432601 | orchestrator | 2025-09-23 07:41:12 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:12.433915 | orchestrator | 2025-09-23 07:41:12 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:12.433969 | orchestrator | 2025-09-23 07:41:12 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:15.475823 | orchestrator | 2025-09-23 07:41:15 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:15.478264 | orchestrator | 2025-09-23 07:41:15 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:15.478317 | orchestrator | 2025-09-23 07:41:15 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:18.519884 | orchestrator | 2025-09-23 07:41:18 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:18.520614 | orchestrator | 2025-09-23 07:41:18 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 
07:41:18.520902 | orchestrator | 2025-09-23 07:41:18 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:21.569074 | orchestrator | 2025-09-23 07:41:21 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:21.570723 | orchestrator | 2025-09-23 07:41:21 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:21.570774 | orchestrator | 2025-09-23 07:41:21 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:24.617866 | orchestrator | 2025-09-23 07:41:24 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:24.620260 | orchestrator | 2025-09-23 07:41:24 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:24.620811 | orchestrator | 2025-09-23 07:41:24 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:27.663483 | orchestrator | 2025-09-23 07:41:27 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:27.664783 | orchestrator | 2025-09-23 07:41:27 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:27.664831 | orchestrator | 2025-09-23 07:41:27 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:30.708301 | orchestrator | 2025-09-23 07:41:30 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:30.709515 | orchestrator | 2025-09-23 07:41:30 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:30.709567 | orchestrator | 2025-09-23 07:41:30 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:33.764738 | orchestrator | 2025-09-23 07:41:33 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:33.764883 | orchestrator | 2025-09-23 07:41:33 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:33.764968 | orchestrator | 2025-09-23 07:41:33 | INFO  | Wait 1 second(s) 
until the next check 2025-09-23 07:41:36.814437 | orchestrator | 2025-09-23 07:41:36 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:36.815549 | orchestrator | 2025-09-23 07:41:36 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:36.816178 | orchestrator | 2025-09-23 07:41:36 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:39.867778 | orchestrator | 2025-09-23 07:41:39 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:39.867894 | orchestrator | 2025-09-23 07:41:39 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:39.868700 | orchestrator | 2025-09-23 07:41:39 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:42.901819 | orchestrator | 2025-09-23 07:41:42 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:42.901977 | orchestrator | 2025-09-23 07:41:42 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:42.901990 | orchestrator | 2025-09-23 07:41:42 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:45.958008 | orchestrator | 2025-09-23 07:41:45 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:45.963463 | orchestrator | 2025-09-23 07:41:45 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:45.964260 | orchestrator | 2025-09-23 07:41:45 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:49.013171 | orchestrator | 2025-09-23 07:41:49 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:49.013702 | orchestrator | 2025-09-23 07:41:49 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:49.013735 | orchestrator | 2025-09-23 07:41:49 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:52.074598 | orchestrator | 2025-09-23 
07:41:52 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:52.075303 | orchestrator | 2025-09-23 07:41:52 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:52.075778 | orchestrator | 2025-09-23 07:41:52 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:55.118099 | orchestrator | 2025-09-23 07:41:55 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:55.118378 | orchestrator | 2025-09-23 07:41:55 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:55.118405 | orchestrator | 2025-09-23 07:41:55 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:41:58.156553 | orchestrator | 2025-09-23 07:41:58 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:41:58.158946 | orchestrator | 2025-09-23 07:41:58 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:41:58.159027 | orchestrator | 2025-09-23 07:41:58 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:01.202150 | orchestrator | 2025-09-23 07:42:01 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:01.203181 | orchestrator | 2025-09-23 07:42:01 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:01.203319 | orchestrator | 2025-09-23 07:42:01 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:04.244514 | orchestrator | 2025-09-23 07:42:04 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:04.244887 | orchestrator | 2025-09-23 07:42:04 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:04.244981 | orchestrator | 2025-09-23 07:42:04 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:07.279298 | orchestrator | 2025-09-23 07:42:07 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state 
STARTED 2025-09-23 07:42:07.279791 | orchestrator | 2025-09-23 07:42:07 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:07.279829 | orchestrator | 2025-09-23 07:42:07 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:10.325457 | orchestrator | 2025-09-23 07:42:10 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:10.326255 | orchestrator | 2025-09-23 07:42:10 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:10.326295 | orchestrator | 2025-09-23 07:42:10 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:13.378837 | orchestrator | 2025-09-23 07:42:13 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:13.379227 | orchestrator | 2025-09-23 07:42:13 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:13.379783 | orchestrator | 2025-09-23 07:42:13 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:16.412075 | orchestrator | 2025-09-23 07:42:16 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:16.412550 | orchestrator | 2025-09-23 07:42:16 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:16.412653 | orchestrator | 2025-09-23 07:42:16 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:19.459447 | orchestrator | 2025-09-23 07:42:19 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:19.462363 | orchestrator | 2025-09-23 07:42:19 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:19.462451 | orchestrator | 2025-09-23 07:42:19 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:22.513434 | orchestrator | 2025-09-23 07:42:22 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:22.515717 | orchestrator | 2025-09-23 07:42:22 | INFO  
| Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:22.515760 | orchestrator | 2025-09-23 07:42:22 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:25.563383 | orchestrator | 2025-09-23 07:42:25 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:25.565446 | orchestrator | 2025-09-23 07:42:25 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:25.565700 | orchestrator | 2025-09-23 07:42:25 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:28.619400 | orchestrator | 2025-09-23 07:42:28 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:28.622717 | orchestrator | 2025-09-23 07:42:28 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:28.622765 | orchestrator | 2025-09-23 07:42:28 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:31.670223 | orchestrator | 2025-09-23 07:42:31 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:31.671373 | orchestrator | 2025-09-23 07:42:31 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:31.672097 | orchestrator | 2025-09-23 07:42:31 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:34.721802 | orchestrator | 2025-09-23 07:42:34 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:34.722814 | orchestrator | 2025-09-23 07:42:34 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 07:42:34.722856 | orchestrator | 2025-09-23 07:42:34 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:42:37.769454 | orchestrator | 2025-09-23 07:42:37 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED 2025-09-23 07:42:37.771845 | orchestrator | 2025-09-23 07:42:37 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED 2025-09-23 
07:42:37.772191 | orchestrator | 2025-09-23 07:42:37 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:42:40.814164 | orchestrator | 2025-09-23 07:42:40 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:42:40.814280 | orchestrator | 2025-09-23 07:42:40 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:42:40.814297 | orchestrator | 2025-09-23 07:42:40 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:42:43.862246 | orchestrator | 2025-09-23 07:42:43 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:42:43.863426 | orchestrator | 2025-09-23 07:42:43 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:42:43.863548 | orchestrator | 2025-09-23 07:42:43 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:42:46.909087 | orchestrator | 2025-09-23 07:42:46 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:42:46.910227 | orchestrator | 2025-09-23 07:42:46 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:42:46.910284 | orchestrator | 2025-09-23 07:42:46 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:42:49.950493 | orchestrator | 2025-09-23 07:42:49 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:42:49.952143 | orchestrator | 2025-09-23 07:42:49 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:42:49.952190 | orchestrator | 2025-09-23 07:42:49 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:42:52.987500 | orchestrator | 2025-09-23 07:42:52 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:42:52.990748 | orchestrator | 2025-09-23 07:42:52 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:42:52.990818 | orchestrator | 2025-09-23 07:42:52 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:42:56.036339 | orchestrator | 2025-09-23 07:42:56 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:42:56.040752 | orchestrator | 2025-09-23 07:42:56 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:42:56.040832 | orchestrator | 2025-09-23 07:42:56 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:42:59.081974 | orchestrator | 2025-09-23 07:42:59 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:42:59.082254 | orchestrator | 2025-09-23 07:42:59 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:42:59.082763 | orchestrator | 2025-09-23 07:42:59 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:43:02.128885 | orchestrator | 2025-09-23 07:43:02 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:43:02.129665 | orchestrator | 2025-09-23 07:43:02 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:43:02.129700 | orchestrator | 2025-09-23 07:43:02 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:43:05.180116 | orchestrator | 2025-09-23 07:43:05 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:43:05.183789 | orchestrator | 2025-09-23 07:43:05 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:43:05.183844 | orchestrator | 2025-09-23 07:43:05 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:43:08.223488 | orchestrator | 2025-09-23 07:43:08 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:43:08.228821 | orchestrator | 2025-09-23 07:43:08 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:43:08.228918 | orchestrator | 2025-09-23 07:43:08 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:43:11.274244 | orchestrator | 2025-09-23 07:43:11 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:43:11.274690 | orchestrator | 2025-09-23 07:43:11 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:43:11.274955 | orchestrator | 2025-09-23 07:43:11 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:43:14.336784 | orchestrator | 2025-09-23 07:43:14 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:43:14.337193 | orchestrator | 2025-09-23 07:43:14 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state STARTED
2025-09-23 07:43:14.337225 | orchestrator | 2025-09-23 07:43:14 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:43:17.387602 | orchestrator | 2025-09-23 07:43:17 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:43:17.396996 | orchestrator | 2025-09-23 07:43:17 | INFO  | Task e7f6e375-dd9d-4a1e-b8ed-8bd5ca38c226 is in state SUCCESS
2025-09-23 07:43:17.399792 | orchestrator |
2025-09-23 07:43:17.399836 | orchestrator |
2025-09-23 07:43:17.399849 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:43:17.399862 | orchestrator |
2025-09-23 07:43:17.399873 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:43:17.399884 | orchestrator | Tuesday 23 September 2025 07:36:53 +0000 (0:00:00.369) 0:00:00.369 *****
2025-09-23 07:43:17.399895 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:43:17.399907 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:43:17.399918 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:43:17.399929 | orchestrator |
2025-09-23 07:43:17.399940 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:43:17.399951 | orchestrator | Tuesday 23 September 2025 07:36:54 +0000 (0:00:00.453) 0:00:00.822 *****
2025-09-23 07:43:17.399963 |
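The per-second status polling recorded above can be sketched as a small loop. A minimal sketch, assuming a `get_state` callable that maps a task id to a state string; in the real deployment this check goes through the OSISM tooling, so the callable and its signature here are illustrative:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until every task leaves PENDING/STARTED.

    get_state: hypothetical callable, task id -> state string
               (e.g. "STARTED", "SUCCESS"); stands in for the real client.
    Returns a dict mapping each task id to its terminal state.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Note that the wall-clock gap between checks in the log is roughly three seconds even though the message says one second: the sleep interval does not include the time spent querying the task states themselves.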
orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-09-23 07:43:17.399973 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-09-23 07:43:17.399984 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-09-23 07:43:17.399995 | orchestrator |
2025-09-23 07:43:17.400005 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-09-23 07:43:17.400016 | orchestrator |
2025-09-23 07:43:17.400027 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-23 07:43:17.400037 | orchestrator | Tuesday 23 September 2025 07:36:55 +0000 (0:00:00.859) 0:00:01.681 *****
2025-09-23 07:43:17.400048 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:43:17.400059 | orchestrator |
2025-09-23 07:43:17.400069 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-09-23 07:43:17.400080 | orchestrator | Tuesday 23 September 2025 07:36:55 +0000 (0:00:00.798) 0:00:02.480 *****
2025-09-23 07:43:17.400090 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:43:17.400101 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:43:17.400112 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:43:17.400122 | orchestrator |
2025-09-23 07:43:17.400133 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-23 07:43:17.400143 | orchestrator | Tuesday 23 September 2025 07:36:56 +0000 (0:00:00.837) 0:00:03.317 *****
2025-09-23 07:43:17.400154 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:43:17.400164 | orchestrator |
2025-09-23 07:43:17.400175 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-09-23 07:43:17.400186 | orchestrator | Tuesday 23 September 2025 07:36:57 +0000 (0:00:01.054) 0:00:04.372 *****
2025-09-23 07:43:17.400196 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:43:17.400207 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:43:17.400217 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:43:17.400228 | orchestrator |
2025-09-23 07:43:17.400238 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-09-23 07:43:17.400374 | orchestrator | Tuesday 23 September 2025 07:36:58 +0000 (0:00:00.716) 0:00:05.088 *****
2025-09-23 07:43:17.400386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-23 07:43:17.400399 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-23 07:43:17.400412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-23 07:43:17.400424 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-23 07:43:17.400436 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-23 07:43:17.400504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-23 07:43:17.400517 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-23 07:43:17.400531 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-23 07:43:17.400543 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-23 07:43:17.400556 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-23 07:43:17.400568 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-23 07:43:17.400580 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-23 07:43:17.400593 | orchestrator |
2025-09-23 07:43:17.400618 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-23 07:43:17.400631 | orchestrator | Tuesday 23 September 2025 07:37:02 +0000 (0:00:03.952) 0:00:09.040 *****
2025-09-23 07:43:17.400644 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-23 07:43:17.400657 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-23 07:43:17.400668 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-23 07:43:17.400679 | orchestrator |
2025-09-23 07:43:17.400689 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-23 07:43:17.400700 | orchestrator | Tuesday 23 September 2025 07:37:03 +0000 (0:00:01.004) 0:00:10.045 *****
2025-09-23 07:43:17.400711 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-23 07:43:17.400722 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-23 07:43:17.400733 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-23 07:43:17.400743 | orchestrator |
2025-09-23 07:43:17.400754 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-23 07:43:17.400764 | orchestrator | Tuesday 23 September 2025 07:37:04 +0000 (0:00:01.405) 0:00:11.451 *****
2025-09-23 07:43:17.400775 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-09-23 07:43:17.400786 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.400811 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-09-23 07:43:17.400823 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.400833 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-09-23 07:43:17.400844 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.400855 | orchestrator |
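On each node, the sysctl and module-persistence tasks above amount to drop-in files like the following. The file names are illustrative assumptions; the values are the ones applied in the log, with `net.ipv4.tcp_retries2` reported `ok` at `KOLLA_UNSET`, i.e. left untouched:

```ini
# hypothetical drop-in, e.g. /etc/sysctl.d/<something>.conf
net.ipv6.ip_nonlocal_bind = 1
net.ipv4.ip_nonlocal_bind = 1
net.unix.max_dgram_qlen = 128

# hypothetical drop-in, e.g. /etc/modules-load.d/<something>.conf
# (loads ip_vs at boot; modprobe handles the running system)
ip_vs
```

The `ip_nonlocal_bind` settings let haproxy/keepalived bind to the VIP even when it is not currently assigned to the node, which is what makes the failover pair work.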
2025-09-23 07:43:17.400865 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-23 07:43:17.400876 | orchestrator | Tuesday 23 September 2025 07:37:05 +0000 (0:00:00.763) 0:00:12.214 ***** 2025-09-23 07:43:17.400891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.400956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.400979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.401192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.401203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.401223 | orchestrator | 2025-09-23 07:43:17.401234 
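Each item in the task above is a container definition whose `healthcheck` block maps onto Docker-style healthcheck options. A minimal sketch, reusing the field names exactly as they appear in the log; the conversion helper itself is an illustrative assumption, not Kolla's code:

```python
# Convert the healthcheck dict from a logged container definition into
# docker run-style flags. Field names mirror the log output above.
def healthcheck_flags(hc: dict) -> list[str]:
    return [
        f"--health-cmd={' '.join(hc['test'][1:])}",  # drop the CMD-SHELL marker
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# The haproxy healthcheck from testbed-node-0's definition in the log.
haproxy_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
    "timeout": "30",
}
```

Note how each node's haproxy check curls its own API-interface address (192.168.16.10/.11/.12), while the proxysql check only verifies that something is listening on the admin port 6032.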
| orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-23 07:43:17.401274 | orchestrator | Tuesday 23 September 2025 07:37:08 +0000 (0:00:02.508) 0:00:14.723 ***** 2025-09-23 07:43:17.401286 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.401297 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.401307 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.401318 | orchestrator | 2025-09-23 07:43:17.401329 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-23 07:43:17.401339 | orchestrator | Tuesday 23 September 2025 07:37:09 +0000 (0:00:01.331) 0:00:16.055 ***** 2025-09-23 07:43:17.401350 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-23 07:43:17.401361 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-23 07:43:17.401372 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-23 07:43:17.401382 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-23 07:43:17.401393 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-23 07:43:17.401404 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-23 07:43:17.401414 | orchestrator | 2025-09-23 07:43:17.401425 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-23 07:43:17.401436 | orchestrator | Tuesday 23 September 2025 07:37:12 +0000 (0:00:03.310) 0:00:19.365 ***** 2025-09-23 07:43:17.401446 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.401457 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.401467 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.401478 | orchestrator | 2025-09-23 07:43:17.401506 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-23 07:43:17.401517 | orchestrator | Tuesday 23 September 2025 07:37:13 +0000 (0:00:01.064) 
0:00:20.429 ***** 2025-09-23 07:43:17.401528 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.401538 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.401549 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.401570 | orchestrator | 2025-09-23 07:43:17.401581 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-23 07:43:17.401592 | orchestrator | Tuesday 23 September 2025 07:37:15 +0000 (0:00:01.602) 0:00:22.032 ***** 2025-09-23 07:43:17.401609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.401642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.401655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.401676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-23 07:43:17.401687 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.401699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.401710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.401727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.401739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-23 07:43:17.401750 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.401770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.401790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.401801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.401813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-23 07:43:17.401824 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.401835 | orchestrator | 2025-09-23 07:43:17.401846 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-23 07:43:17.401857 | orchestrator | Tuesday 23 September 2025 07:37:16 +0000 (0:00:01.320) 0:00:23.352 ***** 2025-09-23 07:43:17.401868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.401945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-23 07:43:17.401957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.401972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
 2025-09-23 07:43:17.401984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-23 07:43:17.402273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.402293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.402304 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3', '__omit_place_holder__30ca0bacd667ee777b7a0f5b609253696f65feb3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-23 07:43:17.402315 | orchestrator | 2025-09-23 07:43:17.402326 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-23 07:43:17.402337 | orchestrator | Tuesday 23 September 2025 07:37:19 +0000 (0:00:02.986) 0:00:26.338 ***** 2025-09-23 07:43:17.402348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.402359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.402377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.402404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.402416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.402427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.402439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.402450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.402461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.402472 | orchestrator | 2025-09-23 07:43:17.402537 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-23 07:43:17.402550 | orchestrator | Tuesday 23 September 2025 07:37:23 +0000 (0:00:03.512) 0:00:29.850 ***** 2025-09-23 07:43:17.402569 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-23 07:43:17.402580 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-23 07:43:17.402591 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-23 07:43:17.402602 | orchestrator | 2025-09-23 07:43:17.402613 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-23 07:43:17.402624 | orchestrator | Tuesday 23 September 2025 07:37:25 +0000 (0:00:02.213) 0:00:32.064 ***** 2025-09-23 07:43:17.402635 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-23 07:43:17.402645 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-23 07:43:17.402656 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-23 07:43:17.402667 | orchestrator | 2025-09-23 07:43:17.403702 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-23 07:43:17.403797 | orchestrator | Tuesday 23 September 2025 07:37:32 +0000 (0:00:06.800) 0:00:38.864 ***** 2025-09-23 07:43:17.403813 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.403825 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.403836 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.403846 | orchestrator | 2025-09-23 07:43:17.403858 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-23 07:43:17.403869 | orchestrator | Tuesday 23 September 2025 07:37:33 +0000 (0:00:00.891) 0:00:39.756 ***** 2025-09-23 07:43:17.403907 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-23 07:43:17.403941 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-23 07:43:17.403959 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-23 07:43:17.403978 | orchestrator | 2025-09-23 07:43:17.403996 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-23 07:43:17.404013 | orchestrator | Tuesday 23 September 2025 07:37:36 +0000 (0:00:03.235) 0:00:42.992 ***** 2025-09-23 07:43:17.404031 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-23 07:43:17.404048 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-23 07:43:17.404066 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-23 07:43:17.404082 | orchestrator | 2025-09-23 07:43:17.404100 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-23 07:43:17.404119 | orchestrator | Tuesday 23 September 2025 07:37:39 +0000 (0:00:02.932) 0:00:45.925 ***** 2025-09-23 07:43:17.404136 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-23 07:43:17.404153 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-23 07:43:17.404170 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-23 07:43:17.404185 | orchestrator | 2025-09-23 07:43:17.404201 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-23 07:43:17.404218 | orchestrator | Tuesday 23 September 2025 07:37:41 +0000 (0:00:02.320) 0:00:48.246 ***** 2025-09-23 07:43:17.404237 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-23 07:43:17.404257 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-23 07:43:17.404277 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-23 07:43:17.404332 | orchestrator | 2025-09-23 07:43:17.404353 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-23 07:43:17.404371 | orchestrator | Tuesday 23 September 2025 07:37:44 +0000 (0:00:02.611) 0:00:50.857 ***** 2025-09-23 07:43:17.404383 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.404397 | orchestrator | 2025-09-23 07:43:17.404409 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-23 07:43:17.404422 | orchestrator | Tuesday 23 September 2025 07:37:45 +0000 (0:00:01.582) 0:00:52.440 ***** 2025-09-23 
07:43:17.404443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.404514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.404567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-23 
07:43:17.404590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.404612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.404635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2025-09-23 07:43:17.404673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.404707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.404728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.404741 | orchestrator | 2025-09-23 07:43:17.404754 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-23 07:43:17.404766 | orchestrator | Tuesday 23 September 2025 07:37:50 +0000 (0:00:04.272) 0:00:56.713 ***** 2025-09-23 07:43:17.404789 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.404802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.404814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.404835 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.404848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.404861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.404878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.404892 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.404904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.404925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.404938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.404951 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.404971 | orchestrator | 2025-09-23 07:43:17.404982 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-23 
07:43:17.404995 | orchestrator | Tuesday 23 September 2025 07:37:51 +0000 (0:00:01.851) 0:00:58.564 ***** 2025-09-23 07:43:17.405007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 
07:43:17.405044 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.405060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405126 | orchestrator | skipping: 
[testbed-node-2] 2025-09-23 07:43:17.405156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405220 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.405233 | 
orchestrator | 2025-09-23 07:43:17.405244 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-23 07:43:17.405256 | orchestrator | Tuesday 23 September 2025 07:37:53 +0000 (0:00:01.364) 0:00:59.928 ***** 2025-09-23 07:43:17.405274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405323 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.405342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405378 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.405391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405433 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.405445 | orchestrator | 2025-09-23 07:43:17.405456 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-23 07:43:17.405467 | orchestrator | Tuesday 23 September 2025 07:37:54 +0000 (0:00:00.743) 0:01:00.672 ***** 2025-09-23 07:43:17.405547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405648 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.405667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405739 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.405770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405843 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.405854 | orchestrator | 2025-09-23 07:43:17.405866 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-23 07:43:17.405878 | orchestrator | Tuesday 23 September 2025 07:37:54 +0000 (0:00:00.555) 0:01:01.227 ***** 2025-09-23 07:43:17.405889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405918 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405929 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.405949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.405968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.405980 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.405991 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.406002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406108 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.406119 | orchestrator | 2025-09-23 07:43:17.406136 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-23 07:43:17.406148 | orchestrator | Tuesday 23 September 2025 07:37:55 +0000 (0:00:00.889) 0:01:02.117 ***** 2025-09-23 07:43:17.406159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406211 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.406222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406257 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.406273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406323 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.406334 | orchestrator | 2025-09-23 07:43:17.406345 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-23 07:43:17.406356 | orchestrator | Tuesday 23 September 2025 07:37:56 +0000 (0:00:00.980) 0:01:03.097 ***** 2025-09-23 07:43:17.406368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406402 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.406419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406730 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.406743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406753 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.406763 | orchestrator | 2025-09-23 07:43:17.406773 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-23 07:43:17.406784 | orchestrator | Tuesday 23 September 2025 07:37:57 +0000 (0:00:01.009) 0:01:04.107 ***** 2025-09-23 07:43:17.406794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406846 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.406869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406900 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.406910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-23 07:43:17.406927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-23 07:43:17.406942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-23 07:43:17.406952 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.406962 | orchestrator | 2025-09-23 07:43:17.406972 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-23 07:43:17.406981 | orchestrator | Tuesday 23 September 2025 07:37:58 +0000 (0:00:01.073) 0:01:05.180 ***** 2025-09-23 07:43:17.406991 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-23 07:43:17.407001 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-23 07:43:17.407017 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-23 07:43:17.407028 | orchestrator | 2025-09-23 07:43:17.407037 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-23 07:43:17.407047 | orchestrator | Tuesday 23 September 2025 07:38:01 +0000 
(0:00:02.849) 0:01:08.029 ***** 2025-09-23 07:43:17.407057 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-23 07:43:17.407067 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-23 07:43:17.407076 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-23 07:43:17.407086 | orchestrator | 2025-09-23 07:43:17.407095 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-23 07:43:17.407105 | orchestrator | Tuesday 23 September 2025 07:38:03 +0000 (0:00:01.975) 0:01:10.005 ***** 2025-09-23 07:43:17.407115 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-23 07:43:17.407125 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-23 07:43:17.407134 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-23 07:43:17.407144 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-23 07:43:17.407154 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.407163 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-23 07:43:17.407173 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.407183 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-23 07:43:17.407193 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.407202 | orchestrator | 2025-09-23 07:43:17.407212 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-23 
07:43:17.407228 | orchestrator | Tuesday 23 September 2025 07:38:04 +0000 (0:00:01.270) 0:01:11.275 ***** 2025-09-23 07:43:17.407238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.407249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.407267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-23 07:43:17.407286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.407296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.407307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-23 07:43:17.407323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.407333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.407343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-23 07:43:17.407353 | orchestrator | 2025-09-23 07:43:17.407363 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-23 07:43:17.407373 | orchestrator | Tuesday 23 September 2025 07:38:07 +0000 
(0:00:03.141) 0:01:14.417 ***** 2025-09-23 07:43:17.407382 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.407392 | orchestrator | 2025-09-23 07:43:17.407402 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-23 07:43:17.407415 | orchestrator | Tuesday 23 September 2025 07:38:08 +0000 (0:00:00.731) 0:01:15.148 ***** 2025-09-23 07:43:17.407427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-23 07:43:17.407444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.407455 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042'}}}}) 2025-09-23 07:43:17.407519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.407535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-23 07:43:17.407578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.407587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407597 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407607 | orchestrator | 2025-09-23 07:43:17.407617 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-23 07:43:17.407626 | orchestrator | Tuesday 23 September 2025 07:38:12 +0000 (0:00:04.434) 0:01:19.583 ***** 2025-09-23 07:43:17.407640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-23 07:43:17.407658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.407669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-23 07:43:17.407695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 
'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407705 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.407715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.407729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407749 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.407765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-23 07:43:17.407783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.407793 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.407812 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.407822 | orchestrator | 2025-09-23 07:43:17.407832 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-23 07:43:17.407842 | orchestrator | Tuesday 23 September 2025 07:38:13 +0000 (0:00:00.955) 0:01:20.538 ***** 2025-09-23 07:43:17.407852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-23 07:43:17.407863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-23 07:43:17.407879 | orchestrator | 
skipping: [testbed-node-0] 2025-09-23 07:43:17.407889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-23 07:43:17.407899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-23 07:43:17.407909 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.407919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-23 07:43:17.407929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-23 07:43:17.407944 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.407954 | orchestrator | 2025-09-23 07:43:17.407968 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-23 07:43:17.407978 | orchestrator | Tuesday 23 September 2025 07:38:15 +0000 (0:00:01.233) 0:01:21.772 ***** 2025-09-23 07:43:17.407988 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.407997 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.408007 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.408016 | orchestrator | 2025-09-23 07:43:17.408026 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-23 07:43:17.408036 | orchestrator | Tuesday 23 September 2025 07:38:16 +0000 (0:00:01.360) 0:01:23.132 ***** 2025-09-23 07:43:17.408046 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.408055 | orchestrator | changed: 
[testbed-node-1] 2025-09-23 07:43:17.408065 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.408074 | orchestrator | 2025-09-23 07:43:17.408083 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-23 07:43:17.408093 | orchestrator | Tuesday 23 September 2025 07:38:18 +0000 (0:00:02.424) 0:01:25.557 ***** 2025-09-23 07:43:17.408103 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.408112 | orchestrator | 2025-09-23 07:43:17.408122 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-23 07:43:17.408131 | orchestrator | Tuesday 23 September 2025 07:38:19 +0000 (0:00:00.848) 0:01:26.405 ***** 2025-09-23 07:43:17.408142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.408153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.408200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.408232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408270 | orchestrator | 2025-09-23 07:43:17.408287 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-23 07:43:17.408300 | orchestrator | Tuesday 23 September 2025 07:38:22 +0000 (0:00:03.131) 0:01:29.537 ***** 2025-09-23 07:43:17.408333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.408354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408387 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.408402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.408424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408468 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.408517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.408536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.408568 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.408585 | orchestrator | 2025-09-23 07:43:17.408600 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-23 07:43:17.408610 | orchestrator | Tuesday 23 September 2025 07:38:23 +0000 (0:00:00.548) 0:01:30.086 ***** 2025-09-23 07:43:17.408619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-23 07:43:17.408630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-23 07:43:17.408648 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.408658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-23 07:43:17.408667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-23 07:43:17.408677 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.408692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-23 07:43:17.408702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-23 07:43:17.408712 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.408721 | orchestrator | 2025-09-23 07:43:17.408731 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-23 07:43:17.408740 | orchestrator | Tuesday 23 September 2025 07:38:24 +0000 (0:00:00.990) 0:01:31.076 ***** 2025-09-23 07:43:17.408750 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.408759 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.408769 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.408778 | orchestrator | 2025-09-23 07:43:17.408788 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-23 07:43:17.408797 | orchestrator | Tuesday 23 September 2025 07:38:25 +0000 (0:00:01.316) 0:01:32.393 ***** 2025-09-23 07:43:17.408807 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.408817 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.408826 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.408836 | 
orchestrator | 2025-09-23 07:43:17.408852 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-23 07:43:17.408862 | orchestrator | Tuesday 23 September 2025 07:38:27 +0000 (0:00:01.896) 0:01:34.289 ***** 2025-09-23 07:43:17.408871 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.408881 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.408890 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.408899 | orchestrator | 2025-09-23 07:43:17.408909 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-23 07:43:17.408919 | orchestrator | Tuesday 23 September 2025 07:38:27 +0000 (0:00:00.265) 0:01:34.554 ***** 2025-09-23 07:43:17.408928 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.408938 | orchestrator | 2025-09-23 07:43:17.408947 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-23 07:43:17.408957 | orchestrator | Tuesday 23 September 2025 07:38:28 +0000 (0:00:00.725) 0:01:35.280 ***** 2025-09-23 07:43:17.408967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 
2000 rise 2 fall 5']}}}}) 2025-09-23 07:43:17.408985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-23 07:43:17.408995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-23 07:43:17.409005 | orchestrator | 2025-09-23 07:43:17.409020 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-23 07:43:17.409030 | orchestrator | Tuesday 23 
September 2025 07:38:31 +0000 (0:00:02.390) 0:01:37.670 ***** 2025-09-23 07:43:17.409046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-23 07:43:17.409057 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.409067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-23 07:43:17.409077 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.409087 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-23 07:43:17.409103 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.409112 | orchestrator | 2025-09-23 07:43:17.409122 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-23 07:43:17.409132 | orchestrator | Tuesday 23 September 2025 07:38:32 +0000 (0:00:01.508) 0:01:39.179 ***** 2025-09-23 07:43:17.409176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-23 07:43:17.409196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 
5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-23 07:43:17.409213 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.409230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-23 07:43:17.409246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-23 07:43:17.409261 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.409278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-23 07:43:17.409289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-23 07:43:17.409299 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.409308 | orchestrator | 2025-09-23 07:43:17.409318 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-23 07:43:17.409328 | orchestrator | Tuesday 23 September 2025 07:38:34 +0000 (0:00:01.761) 0:01:40.941 ***** 2025-09-23 07:43:17.409345 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.409354 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.409364 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.409374 | orchestrator | 2025-09-23 07:43:17.409383 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-23 07:43:17.409393 | orchestrator | Tuesday 23 September 2025 07:38:35 +0000 (0:00:00.740) 0:01:41.682 ***** 2025-09-23 07:43:17.409402 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.409412 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.409421 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.409431 | orchestrator | 2025-09-23 07:43:17.409440 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-23 07:43:17.409450 | orchestrator | Tuesday 23 September 2025 07:38:36 +0000 (0:00:01.119) 0:01:42.802 ***** 2025-09-23 07:43:17.409460 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.409470 | orchestrator | 2025-09-23 07:43:17.409479 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-23 07:43:17.409517 | orchestrator | Tuesday 23 September 2025 07:38:36 +0000 (0:00:00.673) 0:01:43.475 ***** 2025-09-23 07:43:17.409548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.409564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.409575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.409681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.409711 | orchestrator | 2025-09-23 07:43:17.409721 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-23 07:43:17.409730 | orchestrator | Tuesday 23 September 2025 07:38:40 +0000 (0:00:03.332) 0:01:46.808 ***** 2025-09-23 07:43:17.409745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-23 07:43:17.409757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409809 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.409820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-23 07:43:17.409831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409883 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.409894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-23 07:43:17.409905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.409950 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.409961 | orchestrator |
2025-09-23 07:43:17.409972 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-23 07:43:17.409983 | orchestrator | Tuesday 23 September 2025 07:38:41 +0000 (0:00:00.897) 0:01:47.706 *****
2025-09-23 07:43:17.409994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-23 07:43:17.410011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-23 07:43:17.410078 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.410090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-23 07:43:17.410102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-23 07:43:17.410113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-23 07:43:17.410124 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.410135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-23 07:43:17.410146 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.410157 | orchestrator |
2025-09-23 07:43:17.410168 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-23 07:43:17.410178 | orchestrator | Tuesday 23 September 2025 07:38:42 +0000 (0:00:01.340) 0:01:49.046 *****
2025-09-23 07:43:17.410189 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.410200 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.410211 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.410221 | orchestrator |
2025-09-23 07:43:17.410232 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-23 07:43:17.410242 | orchestrator | Tuesday 23 September 2025 07:38:43 +0000 (0:00:01.496) 0:01:50.543 *****
2025-09-23 07:43:17.410253 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.410264 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.410274 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.410285 | orchestrator |
2025-09-23 07:43:17.410296 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-23 07:43:17.410306 | orchestrator | Tuesday 23 September 2025 07:38:46 +0000 (0:00:02.534) 0:01:53.077 *****
2025-09-23 07:43:17.410317 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.410328 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.410338 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.410349 | orchestrator |
2025-09-23 07:43:17.410360 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-23 07:43:17.410371 | orchestrator | Tuesday 23 September 2025 07:38:47 +0000 (0:00:00.538) 0:01:53.615 *****
2025-09-23 07:43:17.410381 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.410392 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.410403 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.410413 | orchestrator |
2025-09-23 07:43:17.410424 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-23 07:43:17.410434 | orchestrator | Tuesday 23 September 2025 07:38:47 +0000 (0:00:00.314) 0:01:53.930 *****
2025-09-23 07:43:17.410452 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:43:17.410463 | orchestrator |
2025-09-23 07:43:17.410474 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-23 07:43:17.410536 | orchestrator | Tuesday 23 September 2025 07:38:48 +0000 (0:00:00.799) 0:01:54.730 *****
2025-09-23 07:43:17.410555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-23 07:43:17.410574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-23 07:43:17.410586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-23 07:43:17.410674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-23 07:43:17.410685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-23 07:43:17.410771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-23 07:43:17.410782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410850 | orchestrator |
2025-09-23 07:43:17.410859 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-09-23 07:43:17.410869 | orchestrator | Tuesday 23 September 2025 07:38:52 +0000 (0:00:03.907) 0:01:58.637 *****
2025-09-23 07:43:17.410885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-23 07:43:17.410896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-23 07:43:17.410905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.410972 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.410982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-23 07:43:17.410992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-23 07:43:17.411014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.411024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.411038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.411054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.411065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.411074 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.411084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:43:17.411101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:43:17.411111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.411124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.411135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.411150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.411161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.411177 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.411186 | orchestrator | 2025-09-23 07:43:17.411196 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-23 07:43:17.411206 | orchestrator | Tuesday 23 September 2025 07:38:52 +0000 (0:00:00.829) 0:01:59.467 ***** 2025-09-23 07:43:17.411216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-23 07:43:17.411226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-23 07:43:17.411236 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.411246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-23 07:43:17.411255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-23 07:43:17.411265 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.411274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-23 07:43:17.411284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-23 07:43:17.411293 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.411303 | orchestrator | 2025-09-23 07:43:17.411313 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-23 07:43:17.411322 | orchestrator | Tuesday 23 September 2025 07:38:53 +0000 (0:00:00.862) 0:02:00.329 ***** 2025-09-23 07:43:17.411332 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.411341 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.411350 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.411360 | orchestrator | 2025-09-23 07:43:17.411369 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-23 07:43:17.411379 | orchestrator | Tuesday 23 September 2025 07:38:55 +0000 (0:00:01.289) 0:02:01.619 ***** 2025-09-23 07:43:17.411388 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.411397 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.411407 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.411416 | orchestrator | 2025-09-23 
07:43:17.411430 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-23 07:43:17.411439 | orchestrator | Tuesday 23 September 2025 07:38:56 +0000 (0:00:01.884) 0:02:03.504 ***** 2025-09-23 07:43:17.411449 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.411458 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.411468 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.411478 | orchestrator | 2025-09-23 07:43:17.411506 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-23 07:43:17.411516 | orchestrator | Tuesday 23 September 2025 07:38:57 +0000 (0:00:00.416) 0:02:03.920 ***** 2025-09-23 07:43:17.411526 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.411535 | orchestrator | 2025-09-23 07:43:17.411544 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-23 07:43:17.411554 | orchestrator | Tuesday 23 September 2025 07:38:58 +0000 (0:00:00.726) 0:02:04.646 ***** 2025-09-23 07:43:17.411574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:43:17.411594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:43:17.411616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.411635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.411674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:43:17.411692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.411703 | orchestrator | 2025-09-23 07:43:17.411713 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-23 07:43:17.411722 | orchestrator | Tuesday 23 September 2025 07:39:02 +0000 (0:00:03.992) 0:02:08.639 ***** 2025-09-23 07:43:17.411743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:43:17.411761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.411772 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.411787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:43:17.411810 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.411821 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.411840 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:43:17.411858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 
'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.411874 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.411884 | orchestrator | 2025-09-23 07:43:17.411894 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-23 07:43:17.411903 | orchestrator | Tuesday 23 September 2025 07:39:04 +0000 (0:00:02.896) 0:02:11.535 ***** 
2025-09-23 07:43:17.411914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-23 07:43:17.411925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-23 07:43:17.411934 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.411944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-23 07:43:17.411958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-23 07:43:17.411975 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.411985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-23 07:43:17.412001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-23 07:43:17.412012 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.412021 | orchestrator | 2025-09-23 07:43:17.412031 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-23 07:43:17.412041 | orchestrator | Tuesday 23 September 2025 07:39:07 +0000 (0:00:03.063) 0:02:14.599 ***** 2025-09-23 07:43:17.412050 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.412060 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.412069 | orchestrator | 
changed: [testbed-node-2] 2025-09-23 07:43:17.412079 | orchestrator | 2025-09-23 07:43:17.412088 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-23 07:43:17.412098 | orchestrator | Tuesday 23 September 2025 07:39:09 +0000 (0:00:01.378) 0:02:15.977 ***** 2025-09-23 07:43:17.412107 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.412117 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.412126 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.412136 | orchestrator | 2025-09-23 07:43:17.412145 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-23 07:43:17.412154 | orchestrator | Tuesday 23 September 2025 07:39:11 +0000 (0:00:01.961) 0:02:17.939 ***** 2025-09-23 07:43:17.412164 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.412173 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.412182 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.412192 | orchestrator | 2025-09-23 07:43:17.412201 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-23 07:43:17.412211 | orchestrator | Tuesday 23 September 2025 07:39:11 +0000 (0:00:00.438) 0:02:18.377 ***** 2025-09-23 07:43:17.412220 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.412230 | orchestrator | 2025-09-23 07:43:17.412239 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-23 07:43:17.412249 | orchestrator | Tuesday 23 September 2025 07:39:12 +0000 (0:00:00.831) 0:02:19.209 ***** 2025-09-23 07:43:17.412259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-23 07:43:17.412276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-23 07:43:17.412290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-23 07:43:17.412300 | orchestrator | 2025-09-23 07:43:17.412310 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 
2025-09-23 07:43:17.412319 | orchestrator | Tuesday 23 September 2025 07:39:15 +0000 (0:00:03.224) 0:02:22.433 ***** 2025-09-23 07:43:17.412336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-23 07:43:17.412346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-23 07:43:17.412356 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.412365 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.412375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-23 07:43:17.412385 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.412394 | orchestrator | 2025-09-23 07:43:17.412404 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-23 07:43:17.412419 | orchestrator | Tuesday 23 September 2025 07:39:16 +0000 (0:00:00.665) 0:02:23.098 ***** 2025-09-23 07:43:17.412429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-23 07:43:17.412438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-23 07:43:17.412448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-23 07:43:17.412458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-23 07:43:17.412467 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.412477 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.412527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}})  2025-09-23 07:43:17.412542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-23 07:43:17.412553 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.412562 | orchestrator | 2025-09-23 07:43:17.412572 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-23 07:43:17.412582 | orchestrator | Tuesday 23 September 2025 07:39:17 +0000 (0:00:00.736) 0:02:23.835 ***** 2025-09-23 07:43:17.412592 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.412601 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.412611 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.412620 | orchestrator | 2025-09-23 07:43:17.412630 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-23 07:43:17.412640 | orchestrator | Tuesday 23 September 2025 07:39:18 +0000 (0:00:01.332) 0:02:25.167 ***** 2025-09-23 07:43:17.412649 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.412659 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.412669 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.412676 | orchestrator | 2025-09-23 07:43:17.412684 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-23 07:43:17.412692 | orchestrator | Tuesday 23 September 2025 07:39:20 +0000 (0:00:02.030) 0:02:27.198 ***** 2025-09-23 07:43:17.412700 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.412708 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.412721 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.412729 | orchestrator | 2025-09-23 07:43:17.412737 | orchestrator | TASK [include_role : horizon] 
************************************************** 2025-09-23 07:43:17.412745 | orchestrator | Tuesday 23 September 2025 07:39:21 +0000 (0:00:00.422) 0:02:27.620 ***** 2025-09-23 07:43:17.412752 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.412760 | orchestrator | 2025-09-23 07:43:17.412768 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-23 07:43:17.412776 | orchestrator | Tuesday 23 September 2025 07:39:21 +0000 (0:00:00.840) 0:02:28.461 ***** 2025-09-23 07:43:17.412785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-23 07:43:17.412806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-23 07:43:17.412830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-23 07:43:17.412846 | orchestrator | 2025-09-23 07:43:17.412854 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-23 07:43:17.412862 | orchestrator | Tuesday 23 September 2025 07:39:25 +0000 (0:00:03.759) 0:02:32.221 ***** 2025-09-23 07:43:17.412880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-23 07:43:17.412894 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.412907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-23 07:43:17.412916 | orchestrator | skipping: [testbed-node-1] 
2025-09-23 07:43:17.412930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-23 07:43:17.412944 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.412952 | orchestrator | 2025-09-23 07:43:17.412960 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-23 07:43:17.412967 | orchestrator | Tuesday 23 September 2025 07:39:26 +0000 (0:00:01.081) 0:02:33.302 ***** 2025-09-23 07:43:17.412976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-23 07:43:17.412984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-23 07:43:17.412992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-23 07:43:17.413001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-23 07:43:17.413009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-23 07:43:17.413017 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.413031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-23 07:43:17.413039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-23 07:43:17.413047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-23 07:43:17.413060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-23 07:43:17.413074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-23 07:43:17.413082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-23 07:43:17.413090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-23 07:43:17.413097 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.413105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-23 07:43:17.413114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-23 07:43:17.413122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-23 07:43:17.413130 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.413137 | orchestrator |
2025-09-23 07:43:17.413145 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-09-23 07:43:17.413153 | orchestrator | Tuesday 23 September 2025 07:39:27 +0000 (0:00:00.893) 0:02:34.195 *****
2025-09-23 07:43:17.413161 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.413169 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.413176 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.413184 | orchestrator |
2025-09-23 07:43:17.413192 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-09-23 07:43:17.413200 | orchestrator | Tuesday 23 September 2025 07:39:29 +0000 (0:00:01.445) 0:02:35.641 *****
2025-09-23 07:43:17.413207 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.413215 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.413223 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.413231 | orchestrator |
2025-09-23 07:43:17.413238 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-09-23 07:43:17.413246 | orchestrator | Tuesday 23 September 2025 07:39:30 +0000 (0:00:01.768) 0:02:37.410 *****
2025-09-23 07:43:17.413254 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.413261 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.413269 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.413277 | orchestrator |
2025-09-23 07:43:17.413285 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-09-23 07:43:17.413292 | orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.468) 0:02:37.638 *****
2025-09-23 07:43:17.413300 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.413308 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.413316 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.413323 | orchestrator |
2025-09-23 07:43:17.413334 | orchestrator | TASK [include_role : keystone] *************************************************
2025-09-23 07:43:17.413347 |
orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.932) 0:02:38.106 *****
2025-09-23 07:43:17.413355 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:43:17.413363 | orchestrator |
2025-09-23 07:43:17.413370 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-09-23 07:43:17.413378 | orchestrator | Tuesday 23 September 2025 07:39:32 +0000 (0:00:00.932) 0:02:39.039 *****
2025-09-23 07:43:17.413402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:43:17.413413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:43:17.413422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:43:17.413430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:43:17.413442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:43:17.413456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:43:17.413470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:43:17.413479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:43:17.413504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:43:17.413512 | orchestrator |
2025-09-23 07:43:17.413520 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-23 07:43:17.413528 | orchestrator | Tuesday 23 September 2025 07:39:36 +0000 (0:00:04.292) 0:02:43.331 *****
2025-09-23 07:43:17.413541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name':
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:43:17.413555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:43:17.413568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:43:17.413577 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.413585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:43:17.413594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:43:17.413602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:43:17.413618 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.413630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:43:17.413643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:43:17.413651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:43:17.413659 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.413667 | orchestrator |
2025-09-23 07:43:17.413675 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-23 07:43:17.413683 | orchestrator | Tuesday 23 September 2025 07:39:37 +0000 (0:00:00.953) 0:02:44.285 *****
2025-09-23 07:43:17.413691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-23 07:43:17.413699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-23 07:43:17.413707 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.413715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-23 07:43:17.413724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-23 07:43:17.413737 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.413745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-23 07:43:17.413754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-23 07:43:17.413761 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.413769 | orchestrator |
2025-09-23 07:43:17.413777 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-23 07:43:17.413785 | orchestrator | Tuesday 23 September 2025 07:39:38 +0000 (0:00:00.889) 0:02:45.175 *****
2025-09-23 07:43:17.413792 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.413800 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.413808 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.413815 |
orchestrator |
2025-09-23 07:43:17.413823 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-23 07:43:17.413834 | orchestrator | Tuesday 23 September 2025 07:39:39 +0000 (0:00:01.337) 0:02:46.512 *****
2025-09-23 07:43:17.413842 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.413850 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.413858 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.413865 | orchestrator |
2025-09-23 07:43:17.413873 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-23 07:43:17.413881 | orchestrator | Tuesday 23 September 2025 07:39:42 +0000 (0:00:02.104) 0:02:48.617 *****
2025-09-23 07:43:17.413889 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.413896 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.413904 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.413912 | orchestrator |
2025-09-23 07:43:17.413920 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-23 07:43:17.413927 | orchestrator | Tuesday 23 September 2025 07:39:42 +0000 (0:00:00.445) 0:02:49.062 *****
2025-09-23 07:43:17.413935 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:43:17.413943 | orchestrator |
2025-09-23 07:43:17.413950 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-23 07:43:17.413958 | orchestrator | Tuesday 23 September 2025 07:39:43 +0000 (0:00:00.963) 0:02:50.026 *****
2025-09-23 07:43:17.413971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-23 07:43:17.413980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.413994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-23 07:43:17.414006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.414171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-23 07:43:17.414202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.414210 | orchestrator |
2025-09-23 07:43:17.414219 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-09-23 07:43:17.414226 | orchestrator | Tuesday 23 September 2025 07:39:46 +0000 (0:00:03.394) 0:02:53.421 *****
2025-09-23 07:43:17.414235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9511', 'listen_port': '9511'}}}})
2025-09-23 07:43:17.414253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.414261 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.414277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-23 07:43:17.414291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-23 07:43:17.414299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.414313 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.414321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.414329 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.414336 | orchestrator |
2025-09-23 07:43:17.414344 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-09-23 07:43:17.414352 | orchestrator | Tuesday 23 September 2025 07:39:47 +0000 (0:00:00.891) 0:02:54.312 *****
2025-09-23 07:43:17.414360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-23 07:43:17.414368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-23 07:43:17.414376 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.414384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-23 07:43:17.414392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-23 07:43:17.414400 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.414408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-23 07:43:17.414419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-23 07:43:17.414427 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.414434 | orchestrator |
2025-09-23 07:43:17.414442 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-09-23 07:43:17.414450 | orchestrator | Tuesday 23 September 2025 07:39:48 +0000 (0:00:00.844) 0:02:55.157 *****
2025-09-23 07:43:17.414458 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.414465 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.414473 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.414481 | orchestrator |
2025-09-23 07:43:17.414508 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-09-23 07:43:17.414517 | orchestrator | Tuesday 23 September 2025 07:39:49 +0000 (0:00:01.261) 0:02:56.419 *****
2025-09-23 07:43:17.414524 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.414532 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.414540 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.414547 | orchestrator |
2025-09-23 07:43:17.414555 | orchestrator | TASK [include_role : manila] ***************************************************
2025-09-23 07:43:17.414563 | orchestrator | Tuesday 23 September 2025 07:39:51 +0000 (0:00:02.030) 0:02:58.450 *****
2025-09-23 07:43:17.414576 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:43:17.414589 | orchestrator |
2025-09-23 07:43:17.414597 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-09-23 07:43:17.414604 | orchestrator | Tuesday 23 September 2025 07:39:53 +0000 (0:00:01.344) 0:02:59.794 *****
2025-09-23 07:43:17.414613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-23 07:43:17.414621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 07:43:17.414629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-23 07:43:17.414637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-23 07:43:17.414700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414738 | orchestrator | 2025-09-23 07:43:17.414746 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-23 07:43:17.414754 | orchestrator | Tuesday 23 September 2025 07:39:57 +0000 (0:00:04.341) 0:03:04.136 ***** 2025-09-23 07:43:17.414762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-23 07:43:17.414770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-23 07:43:17.414790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414831 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.414840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414850 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414858 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.414866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-23 07:43:17.414874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.414913 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.414921 | orchestrator | 2025-09-23 07:43:17.414929 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-23 07:43:17.414937 | orchestrator | Tuesday 23 September 2025 07:39:58 +0000 (0:00:00.980) 0:03:05.116 ***** 2025-09-23 07:43:17.414945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2025-09-23 07:43:17.414952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-23 07:43:17.414960 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.414968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-23 07:43:17.414976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-23 07:43:17.414984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-23 07:43:17.414992 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-23 07:43:17.415008 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.415016 | orchestrator | 2025-09-23 07:43:17.415023 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-23 07:43:17.415031 | orchestrator | Tuesday 23 September 2025 07:40:00 +0000 (0:00:01.682) 0:03:06.799 ***** 2025-09-23 07:43:17.415039 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.415047 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.415054 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.415062 | orchestrator | 2025-09-23 
07:43:17.415070 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-23 07:43:17.415078 | orchestrator | Tuesday 23 September 2025 07:40:01 +0000 (0:00:01.419) 0:03:08.218 ***** 2025-09-23 07:43:17.415085 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.415093 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.415101 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.415108 | orchestrator | 2025-09-23 07:43:17.415116 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-23 07:43:17.415124 | orchestrator | Tuesday 23 September 2025 07:40:03 +0000 (0:00:02.245) 0:03:10.463 ***** 2025-09-23 07:43:17.415137 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.415145 | orchestrator | 2025-09-23 07:43:17.415153 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-23 07:43:17.415160 | orchestrator | Tuesday 23 September 2025 07:40:05 +0000 (0:00:01.366) 0:03:11.829 ***** 2025-09-23 07:43:17.415168 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-23 07:43:17.415176 | orchestrator | 2025-09-23 07:43:17.415184 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-23 07:43:17.415191 | orchestrator | Tuesday 23 September 2025 07:40:07 +0000 (0:00:02.754) 0:03:14.584 ***** 2025-09-23 07:43:17.415209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:43:17.415219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2025-09-23 07:43:17.415228 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.415236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:43:17.415256 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-23 07:43:17.415264 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:43:17.415288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-23 07:43:17.415301 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.415309 | orchestrator | 2025-09-23 07:43:17.415316 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-23 07:43:17.415324 | orchestrator | Tuesday 23 September 2025 07:40:09 +0000 (0:00:01.989) 0:03:16.574 ***** 2025-09-23 07:43:17.415336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:43:17.415350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2025-09-23 07:43:17.415358 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.415367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:43:17.415381 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-23 07:43:17.415389 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.415406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:43:17.415415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-23 07:43:17.415423 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415431 | orchestrator | 2025-09-23 07:43:17.415439 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-23 07:43:17.415451 | orchestrator | Tuesday 23 September 2025 07:40:12 +0000 (0:00:02.268) 0:03:18.842 ***** 2025-09-23 07:43:17.415460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-23 07:43:17.415468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-23 07:43:17.415476 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.415533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-23 07:43:17.415543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-23 07:43:17.415551 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.415564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-23 07:43:17.415573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-23 07:43:17.415581 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415588 | orchestrator | 2025-09-23 07:43:17.415596 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-23 07:43:17.415609 | orchestrator | Tuesday 23 September 2025 07:40:15 +0000 (0:00:02.819) 0:03:21.662 ***** 2025-09-23 07:43:17.415617 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.415625 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.415633 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.415640 | 
orchestrator | 2025-09-23 07:43:17.415648 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-23 07:43:17.415656 | orchestrator | Tuesday 23 September 2025 07:40:17 +0000 (0:00:01.965) 0:03:23.627 ***** 2025-09-23 07:43:17.415664 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.415672 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.415679 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415687 | orchestrator | 2025-09-23 07:43:17.415695 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-23 07:43:17.415702 | orchestrator | Tuesday 23 September 2025 07:40:18 +0000 (0:00:01.425) 0:03:25.053 ***** 2025-09-23 07:43:17.415709 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.415715 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.415722 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415728 | orchestrator | 2025-09-23 07:43:17.415735 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-23 07:43:17.415742 | orchestrator | Tuesday 23 September 2025 07:40:18 +0000 (0:00:00.353) 0:03:25.406 ***** 2025-09-23 07:43:17.415748 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.415755 | orchestrator | 2025-09-23 07:43:17.415761 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-23 07:43:17.415768 | orchestrator | Tuesday 23 September 2025 07:40:20 +0000 (0:00:01.428) 0:03:26.835 ***** 2025-09-23 07:43:17.415775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-23 07:43:17.415786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-23 07:43:17.415797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
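The skip pattern visible in the memcached tasks follows from the `enabled` flag on each haproxy entry: memcached ships `enabled: False`, so the firewall and single-external-frontend tasks skip it on every node, whereas mariadb's internal entry (`enabled: True`) is processed. A loose sketch of that filter, with an illustrative helper name (not the actual role code):

```python
# Loose sketch of the skip condition seen in the log: only haproxy
# entries with enabled=True are acted upon. Helper name and dict shapes
# are illustrative, not taken from the kolla-ansible haproxy-config role.
def enabled_listen_ports(haproxy_entries):
    """Return {name: listen_port} for entries with enabled=True."""
    return {
        name: entry.get("listen_port", entry.get("port"))
        for name, entry in haproxy_entries.items()
        if entry.get("enabled")
    }

# Trimmed versions of the dicts logged above.
memcached_haproxy = {
    "memcached": {"enabled": False, "mode": "tcp", "port": "11211"},
}
mariadb_haproxy = {
    "mariadb": {"enabled": True, "mode": "tcp",
                "port": "3306", "listen_port": "3306"},
    "mariadb_external_lb": {"enabled": False, "mode": "tcp",
                            "port": "3306", "listen_port": "3306"},
}

print(enabled_listen_ports(memcached_haproxy))  # nothing to open
print(enabled_listen_ports(mariadb_haproxy))    # only the internal frontend
```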
2025-09-23 07:43:17.415809 | orchestrator | 2025-09-23 07:43:17.415815 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-23 07:43:17.415822 | orchestrator | Tuesday 23 September 2025 07:40:21 +0000 (0:00:01.532) 0:03:28.367 ***** 2025-09-23 07:43:17.415829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-23 07:43:17.415836 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.415842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-23 07:43:17.415849 | orchestrator | skipping: 
[testbed-node-1] 2025-09-23 07:43:17.415856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-23 07:43:17.415863 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415870 | orchestrator | 2025-09-23 07:43:17.415876 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-23 07:43:17.415886 | orchestrator | Tuesday 23 September 2025 07:40:22 +0000 (0:00:00.401) 0:03:28.769 ***** 2025-09-23 07:43:17.415893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-23 07:43:17.415901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-23 07:43:17.415907 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.415914 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.415924 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-23 07:43:17.415935 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415941 | orchestrator | 2025-09-23 07:43:17.415948 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-23 07:43:17.415955 | orchestrator | Tuesday 23 September 2025 07:40:22 +0000 (0:00:00.619) 0:03:29.388 ***** 2025-09-23 07:43:17.415961 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.415968 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.415974 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.415981 | orchestrator | 2025-09-23 07:43:17.415988 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-23 07:43:17.415994 | orchestrator | Tuesday 23 September 2025 07:40:23 +0000 (0:00:00.797) 0:03:30.186 ***** 2025-09-23 07:43:17.416001 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.416007 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.416014 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.416021 | orchestrator | 2025-09-23 07:43:17.416027 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-23 07:43:17.416034 | orchestrator | Tuesday 23 September 2025 07:40:24 +0000 (0:00:01.293) 0:03:31.479 ***** 2025-09-23 07:43:17.416040 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.416047 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.416053 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.416060 | orchestrator | 2025-09-23 07:43:17.416066 | orchestrator | TASK [include_role : neutron] 
************************************************** 2025-09-23 07:43:17.416073 | orchestrator | Tuesday 23 September 2025 07:40:25 +0000 (0:00:00.318) 0:03:31.797 ***** 2025-09-23 07:43:17.416080 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.416086 | orchestrator | 2025-09-23 07:43:17.416093 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-23 07:43:17.416100 | orchestrator | Tuesday 23 September 2025 07:40:26 +0000 (0:00:01.423) 0:03:33.221 ***** 2025-09-23 07:43:17.416106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-23 07:43:17.416114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-23 07:43:17.416154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
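The neutron-server item above carries two haproxy entries on the same backend port 9696: an internal frontend and an external one keyed by `external: True` with `external_fqdn: api.testbed.osism.xyz`. A small sketch of that partition, under the assumption (the helper `split_frontends` is illustrative, not kolla-ansible code) that the role groups frontends by the `external` flag:

```python
# Sketch: partition the neutron-server haproxy entries from the log into
# internal vs. external frontends. split_frontends is an illustrative
# helper, not the actual kolla-ansible role logic.
neutron_haproxy = {
    "neutron_server": {
        "enabled": True, "mode": "http", "external": False,
        "port": "9696", "listen_port": "9696",
    },
    "neutron_server_external": {
        "enabled": True, "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "9696", "listen_port": "9696",
    },
}

def split_frontends(entries):
    """Partition enabled haproxy entries into (internal, external) names."""
    internal = [n for n, e in entries.items()
                if e["enabled"] and not e.get("external")]
    external = [n for n, e in entries.items()
                if e["enabled"] and e.get("external")]
    return internal, external

print(split_frontends(neutron_haproxy))
```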
2025-09-23 07:43:17.416204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.416378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-23 07:43:17.416394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2025-09-23 07:43:17.416457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.416502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-23 07:43:17.416528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-09-23 07:43:17.416542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-23 07:43:17.416560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416595 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-09-23 07:43:17.416620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.416648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416655 | orchestrator | 2025-09-23 07:43:17.416662 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-23 07:43:17.416668 | orchestrator | Tuesday 23 September 2025 07:40:30 +0000 (0:00:04.350) 0:03:37.572 ***** 2025-09-23 07:43:17.416675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:43:17.416687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:43:17.416722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-23 07:43:17.416747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-23 07:43:17.416790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:43:17.416904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.416938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.416950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.416977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.416989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.417001 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.417009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-23 07:43:17.417017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.417024 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.417032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.417051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.417063 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.417083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-23 07:43:17.417099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-23 07:43:17.417110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417121 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-23 07:43:17.417133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:43:17.417141 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.417149 | orchestrator | 2025-09-23 07:43:17.417157 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-23 07:43:17.417164 | orchestrator | Tuesday 23 September 2025 07:40:32 +0000 (0:00:01.561) 0:03:39.134 ***** 
2025-09-23 07:43:17.417172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-23 07:43:17.417180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-23 07:43:17.417188 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.417196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-23 07:43:17.417203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-23 07:43:17.417211 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.417218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-23 07:43:17.417226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-23 07:43:17.417233 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.417241 | orchestrator | 2025-09-23 07:43:17.417248 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-23 07:43:17.417256 | orchestrator | Tuesday 23 September 2025 07:40:34 +0000 (0:00:02.099) 0:03:41.233 ***** 2025-09-23 07:43:17.417263 | orchestrator | changed: [testbed-node-0] 2025-09-23 
07:43:17.417271 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.417279 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.417286 | orchestrator | 2025-09-23 07:43:17.417293 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-23 07:43:17.417301 | orchestrator | Tuesday 23 September 2025 07:40:35 +0000 (0:00:01.351) 0:03:42.584 ***** 2025-09-23 07:43:17.417308 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.417320 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.417327 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.417333 | orchestrator | 2025-09-23 07:43:17.417340 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-23 07:43:17.417346 | orchestrator | Tuesday 23 September 2025 07:40:38 +0000 (0:00:02.102) 0:03:44.687 ***** 2025-09-23 07:43:17.417353 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.417360 | orchestrator | 2025-09-23 07:43:17.417366 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-23 07:43:17.417377 | orchestrator | Tuesday 23 September 2025 07:40:39 +0000 (0:00:01.214) 0:03:45.902 ***** 2025-09-23 07:43:17.417389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.417396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.417403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.417410 | orchestrator | 2025-09-23 07:43:17.417417 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-23 07:43:17.417423 | orchestrator | Tuesday 23 September 2025 07:40:43 +0000 (0:00:03.934) 0:03:49.836 ***** 2025-09-23 07:43:17.417434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.417444 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.417455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.417462 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.417469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.417475 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.417482 | orchestrator | 2025-09-23 07:43:17.417503 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-23 07:43:17.417510 | orchestrator | Tuesday 23 September 2025 07:40:43 +0000 (0:00:00.526) 0:03:50.362 ***** 2025-09-23 07:43:17.417517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-23 07:43:17.417524 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-23 07:43:17.417530 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.417537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-23 07:43:17.417544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-23 07:43:17.417551 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.417557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-23 07:43:17.417564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-23 07:43:17.417571 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.417582 | orchestrator | 2025-09-23 07:43:17.417588 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-23 07:43:17.417595 | orchestrator | Tuesday 23 September 2025 07:40:44 +0000 (0:00:00.784) 0:03:51.147 ***** 2025-09-23 07:43:17.417602 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.417609 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.417615 | orchestrator | changed: [testbed-node-2] 2025-09-23 
07:43:17.417622 | orchestrator | 2025-09-23 07:43:17.417628 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-23 07:43:17.417638 | orchestrator | Tuesday 23 September 2025 07:40:45 +0000 (0:00:01.414) 0:03:52.561 ***** 2025-09-23 07:43:17.417645 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.417651 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.417658 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.417665 | orchestrator | 2025-09-23 07:43:17.417671 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-23 07:43:17.417678 | orchestrator | Tuesday 23 September 2025 07:40:48 +0000 (0:00:02.247) 0:03:54.809 ***** 2025-09-23 07:43:17.417684 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.417691 | orchestrator | 2025-09-23 07:43:17.417697 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-23 07:43:17.417704 | orchestrator | Tuesday 23 September 2025 07:40:49 +0000 (0:00:01.552) 0:03:56.361 ***** 2025-09-23 07:43:17.417806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.417818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.417850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.417892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417910 | orchestrator | 2025-09-23 07:43:17.417916 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-23 07:43:17.417923 | orchestrator | Tuesday 23 September 2025 07:40:54 +0000 (0:00:04.363) 0:04:00.725 ***** 2025-09-23 07:43:17.417950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.417959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.417973 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.417980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.417992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.418002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.418010 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.418065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.418074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.418081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.418092 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.418099 | orchestrator | 2025-09-23 07:43:17.418106 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-23 07:43:17.418112 | orchestrator | Tuesday 23 September 2025 07:40:55 +0000 (0:00:01.311) 0:04:02.036 ***** 2025-09-23 07:43:17.418119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-23 
07:43:17.418141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418152 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.418159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418210 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.418222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418251 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-23 07:43:17.418273 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.418284 | orchestrator | 2025-09-23 07:43:17.418303 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-23 07:43:17.418314 | orchestrator | Tuesday 23 September 2025 07:40:56 +0000 (0:00:00.948) 0:04:02.985 ***** 2025-09-23 07:43:17.418325 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.418336 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.418345 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.418352 | orchestrator | 2025-09-23 07:43:17.418358 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-23 07:43:17.418365 | orchestrator | Tuesday 23 September 2025 07:40:57 +0000 (0:00:01.383) 0:04:04.368 ***** 2025-09-23 07:43:17.418371 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.418378 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.418384 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.418391 | orchestrator | 2025-09-23 07:43:17.418397 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-23 07:43:17.418404 | orchestrator | Tuesday 23 September 2025 07:40:59 +0000 (0:00:02.122) 0:04:06.490 ***** 2025-09-23 07:43:17.418410 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.418418 | orchestrator | 2025-09-23 07:43:17.418425 | orchestrator | TASK 
[nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-23 07:43:17.418433 | orchestrator | Tuesday 23 September 2025 07:41:01 +0000 (0:00:01.526) 0:04:08.017 ***** 2025-09-23 07:43:17.418440 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-23 07:43:17.418449 | orchestrator | 2025-09-23 07:43:17.418457 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-23 07:43:17.418464 | orchestrator | Tuesday 23 September 2025 07:41:02 +0000 (0:00:00.851) 0:04:08.868 ***** 2025-09-23 07:43:17.418472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-23 07:43:17.418545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-23 07:43:17.418556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-23 07:43:17.418565 | orchestrator | 2025-09-23 07:43:17.418573 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-23 07:43:17.418581 | orchestrator | Tuesday 23 September 2025 07:41:06 +0000 (0:00:04.487) 0:04:13.356 ***** 2025-09-23 07:43:17.418619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.418635 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.418644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.418652 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.418660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 
'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.418668 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.418676 | orchestrator | 2025-09-23 07:43:17.418683 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-23 07:43:17.418690 | orchestrator | Tuesday 23 September 2025 07:41:08 +0000 (0:00:01.496) 0:04:14.853 ***** 2025-09-23 07:43:17.418697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-23 07:43:17.418705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-23 07:43:17.418713 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.418720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-23 07:43:17.418728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-23 07:43:17.418735 | orchestrator | skipping: 
[testbed-node-1] 2025-09-23 07:43:17.418742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-23 07:43:17.418752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-23 07:43:17.418760 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.418767 | orchestrator | 2025-09-23 07:43:17.418774 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-23 07:43:17.418781 | orchestrator | Tuesday 23 September 2025 07:41:09 +0000 (0:00:01.665) 0:04:16.518 ***** 2025-09-23 07:43:17.418787 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.418794 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.418802 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.418809 | orchestrator | 2025-09-23 07:43:17.418820 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-23 07:43:17.418827 | orchestrator | Tuesday 23 September 2025 07:41:12 +0000 (0:00:02.615) 0:04:19.134 ***** 2025-09-23 07:43:17.418833 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.418840 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.418846 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.418852 | orchestrator | 2025-09-23 07:43:17.418858 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-23 07:43:17.418865 | orchestrator | Tuesday 23 September 2025 07:41:15 +0000 (0:00:03.169) 0:04:22.304 ***** 2025-09-23 07:43:17.418890 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-23 07:43:17.418897 | orchestrator | 2025-09-23 07:43:17.418904 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-23 07:43:17.418910 | orchestrator | Tuesday 23 September 2025 07:41:17 +0000 (0:00:01.465) 0:04:23.770 ***** 2025-09-23 07:43:17.418916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.418923 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.418929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.418936 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.418942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.418948 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.418954 | orchestrator | 2025-09-23 07:43:17.418961 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-23 07:43:17.418967 | orchestrator | Tuesday 23 September 2025 07:41:18 +0000 (0:00:01.273) 0:04:25.043 ***** 2025-09-23 07:43:17.418973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.418979 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.418989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.419000 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.419006 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-23 07:43:17.419013 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.419019 | orchestrator | 2025-09-23 07:43:17.419025 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-23 07:43:17.419031 | orchestrator | Tuesday 23 September 2025 07:41:19 +0000 (0:00:01.357) 0:04:26.401 ***** 2025-09-23 07:43:17.419037 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.419043 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.419049 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.419055 | orchestrator | 2025-09-23 07:43:17.419079 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-23 07:43:17.419086 | orchestrator | Tuesday 23 September 2025 07:41:21 +0000 (0:00:02.041) 0:04:28.443 ***** 2025-09-23 07:43:17.419092 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.419099 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.419105 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.419111 | orchestrator | 2025-09-23 07:43:17.419117 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-23 07:43:17.419123 | orchestrator | Tuesday 23 September 2025 07:41:24 +0000 (0:00:02.361) 0:04:30.804 ***** 2025-09-23 07:43:17.419129 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.419135 | orchestrator | ok: [testbed-node-1] 2025-09-23 
07:43:17.419141 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.419147 | orchestrator | 2025-09-23 07:43:17.419153 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-23 07:43:17.419159 | orchestrator | Tuesday 23 September 2025 07:41:27 +0000 (0:00:02.964) 0:04:33.769 ***** 2025-09-23 07:43:17.419165 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-23 07:43:17.419172 | orchestrator | 2025-09-23 07:43:17.419178 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-23 07:43:17.419184 | orchestrator | Tuesday 23 September 2025 07:41:28 +0000 (0:00:00.854) 0:04:34.624 ***** 2025-09-23 07:43:17.419190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-23 07:43:17.419196 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.419203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2025-09-23 07:43:17.419214 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.419220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-23 07:43:17.419226 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.419233 | orchestrator | 2025-09-23 07:43:17.419239 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-23 07:43:17.419245 | orchestrator | Tuesday 23 September 2025 07:41:29 +0000 (0:00:01.353) 0:04:35.977 ***** 2025-09-23 07:43:17.419255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-23 07:43:17.419261 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.419268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-23 07:43:17.419274 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.419299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-23 07:43:17.419307 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.419313 | orchestrator | 2025-09-23 07:43:17.419319 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-23 07:43:17.419325 | orchestrator | Tuesday 23 September 2025 07:41:30 +0000 (0:00:01.360) 0:04:37.337 ***** 2025-09-23 07:43:17.419331 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.419338 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.419344 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.419350 | orchestrator | 2025-09-23 07:43:17.419356 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-23 07:43:17.419362 | orchestrator | Tuesday 23 September 2025 07:41:32 +0000 (0:00:01.633) 0:04:38.971 ***** 2025-09-23 07:43:17.419368 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.419374 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.419380 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.419386 | orchestrator 
| 2025-09-23 07:43:17.419393 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-23 07:43:17.419399 | orchestrator | Tuesday 23 September 2025 07:41:34 +0000 (0:00:02.461) 0:04:41.432 ***** 2025-09-23 07:43:17.419410 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.419416 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.419422 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.419428 | orchestrator | 2025-09-23 07:43:17.419434 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-23 07:43:17.419440 | orchestrator | Tuesday 23 September 2025 07:41:38 +0000 (0:00:03.342) 0:04:44.775 ***** 2025-09-23 07:43:17.419446 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.419453 | orchestrator | 2025-09-23 07:43:17.419459 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-23 07:43:17.419465 | orchestrator | Tuesday 23 September 2025 07:41:39 +0000 (0:00:01.586) 0:04:46.362 ***** 2025-09-23 07:43:17.419472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.419478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-23 07:43:17.419501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.419536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.419554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-23 07:43:17.419560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-23 
07:43:17.419602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.419613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-23 07:43:17.419620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.419639 | orchestrator | 2025-09-23 07:43:17.419645 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-23 07:43:17.419651 | orchestrator | Tuesday 23 September 2025 07:41:43 +0000 (0:00:03.619) 0:04:49.981 ***** 2025-09-23 07:43:17.419675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.419688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-23 07:43:17.419694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.419776 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.419786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.419815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-23 07:43:17.419823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.419847 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.419853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.419863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-23 07:43:17.419869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-23 07:43:17.419907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:43:17.419913 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.419919 | orchestrator | 2025-09-23 07:43:17.419925 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-23 07:43:17.419932 | orchestrator | Tuesday 23 September 2025 07:41:44 +0000 (0:00:00.749) 0:04:50.730 ***** 2025-09-23 07:43:17.419938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-23 07:43:17.419945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-23 07:43:17.419951 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.419958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-23 07:43:17.419964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-23 07:43:17.419970 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.419976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-23 07:43:17.419982 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-23 07:43:17.419989 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.419995 | orchestrator |
2025-09-23 07:43:17.420001 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-09-23 07:43:17.420007 | orchestrator | Tuesday 23 September 2025 07:41:45 +0000 (0:00:01.547) 0:04:52.278 *****
2025-09-23 07:43:17.420013 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.420019 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.420025 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.420031 | orchestrator |
2025-09-23 07:43:17.420037 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-09-23 07:43:17.420044 | orchestrator | Tuesday 23 September 2025 07:41:47 +0000 (0:00:01.477) 0:04:53.756 *****
2025-09-23 07:43:17.420049 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:43:17.420059 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:43:17.420066 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:43:17.420076 | orchestrator |
2025-09-23 07:43:17.420082 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-09-23 07:43:17.420089 | orchestrator | Tuesday 23 September 2025 07:41:49 +0000 (0:00:02.203) 0:04:55.960 *****
2025-09-23 07:43:17.420095 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:43:17.420101 | orchestrator |
2025-09-23 07:43:17.420107 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-09-23 07:43:17.420113 | orchestrator | Tuesday 23 September 2025 07:41:50 +0000 (0:00:01.367) 0:04:57.328 *****
2025-09-23 07:43:17.420138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-23 07:43:17.420147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-23 07:43:17.420153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-23 07:43:17.420160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-23 07:43:17.420193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-23 07:43:17.420202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-23 07:43:17.420209 | orchestrator |
2025-09-23 07:43:17.420215 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-09-23 07:43:17.420221 | orchestrator | Tuesday 23 September 2025 07:41:56 +0000 (0:00:05.552) 0:05:02.880 *****
2025-09-23 07:43:17.420227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-23 07:43:17.420234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-23 07:43:17.420249 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.420256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-23 07:43:17.420281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-23 07:43:17.420289 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.420295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-23 07:43:17.420302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-23 07:43:17.420313 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.420319 | orchestrator |
2025-09-23 07:43:17.420325 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-09-23 07:43:17.420332 | orchestrator | Tuesday 23 September 2025 07:41:56 +0000 (0:00:00.687) 0:05:03.568 *****
2025-09-23 07:43:17.420341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-23 07:43:17.420348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-23 07:43:17.420354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-23 07:43:17.420360 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.420367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-23 07:43:17.420390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-23 07:43:17.420398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-23 07:43:17.420404 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.420410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-23 07:43:17.420416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-23 07:43:17.420423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-23 07:43:17.420429 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.420435 | orchestrator |
2025-09-23 07:43:17.420441 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-09-23 07:43:17.420447 | orchestrator | Tuesday 23 September 2025 07:41:57 +0000 (0:00:00.927) 0:05:04.495 *****
2025-09-23 07:43:17.420453 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.420459 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.420466 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.420471 | orchestrator |
2025-09-23 07:43:17.420478 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-09-23 07:43:17.420499 | orchestrator | Tuesday 23 September 2025 07:41:58 +0000 (0:00:00.833) 0:05:05.329 *****
2025-09-23 07:43:17.420505 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.420511 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.420522 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.420528 | orchestrator |
2025-09-23 07:43:17.420534 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-09-23 07:43:17.420540 | orchestrator | Tuesday 23 September 2025 07:41:59 +0000 (0:00:01.158) 0:05:06.488 *****
2025-09-23 07:43:17.420546 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:43:17.420553 | orchestrator |
2025-09-23 07:43:17.420559 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-09-23 07:43:17.420565 | orchestrator | Tuesday 23 September 2025 07:42:01 +0000 (0:00:01.345) 0:05:07.833 *****
2025-09-23 07:43:17.420571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-23 07:43:17.420583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:43:17.420590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:43:17.420632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-23 07:43:17.420642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:43:17.420649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-23 07:43:17.420665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:43:17.420697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:43:17.420704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:43:17.420731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-23 07:43:17.420741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-23 07:43:17.420749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:43:17.420772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-23 07:43:17.420782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-23 07:43:17.420793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-23 07:43:17.420800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-23 07:43:17.420816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:43:17.420839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:43:17.420848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:43:17.420855 | orchestrator |
2025-09-23 07:43:17.420861 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-09-23 07:43:17.420868 | orchestrator | Tuesday 23 September 2025 07:42:05 +0000 (0:00:04.064) 0:05:11.897 *****
2025-09-23 07:43:17.420874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-23 07:43:17.420885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes':
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-23 07:43:17.420891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.420898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.420907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-23 07:43:17.420917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-23 07:43:17.420924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-23 07:43:17.420936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.420943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.420949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-23 07:43:17.420958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-23 07:43:17.420965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-23 07:43:17.420971 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.420981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.420994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.421000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-23 07:43:17.421006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-23 07:43:17.421013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-23 07:43:17.421023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.421033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-23 07:43:17.421046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.421053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-23 07:43:17.421059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-23 07:43:17.421065 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.421078 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.421087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-23 07:43:17.421097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-23 07:43:17.421108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-23 07:43:17.421114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.421121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-23 07:43:17.421127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-23 07:43:17.421133 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421139 | orchestrator | 2025-09-23 07:43:17.421146 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-23 07:43:17.421152 | orchestrator | Tuesday 23 September 2025 07:42:06 +0000 (0:00:01.344) 0:05:13.242 ***** 2025-09-23 07:43:17.421158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-23 07:43:17.421168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-23 07:43:17.421174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-23 07:43:17.421189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-23 07:43:17.421196 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-23 07:43:17.421215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-23 07:43:17.421222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-23 07:43:17.421229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-23 07:43:17.421235 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-23 07:43:17.421247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-23 07:43:17.421254 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-23 07:43:17.421260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-23 07:43:17.421266 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421273 | orchestrator | 2025-09-23 07:43:17.421279 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-23 07:43:17.421285 | orchestrator | Tuesday 23 September 2025 07:42:07 +0000 (0:00:00.986) 0:05:14.229 ***** 2025-09-23 07:43:17.421291 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421297 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421303 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421309 | orchestrator | 2025-09-23 07:43:17.421315 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-23 07:43:17.421321 | orchestrator | Tuesday 23 September 2025 07:42:08 +0000 (0:00:00.467) 0:05:14.696 ***** 2025-09-23 07:43:17.421327 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421333 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421339 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421345 | orchestrator | 2025-09-23 07:43:17.421352 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-23 07:43:17.421358 | orchestrator | Tuesday 23 September 2025 07:42:09 +0000 (0:00:01.532) 0:05:16.228 ***** 2025-09-23 
07:43:17.421364 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.421375 | orchestrator | 2025-09-23 07:43:17.421381 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-23 07:43:17.421387 | orchestrator | Tuesday 23 September 2025 07:42:11 +0000 (0:00:01.771) 0:05:17.999 ***** 2025-09-23 07:43:17.421397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-23 07:43:17.421408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-23 07:43:17.421415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-23 07:43:17.421422 | orchestrator | 2025-09-23 07:43:17.421428 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-23 07:43:17.421434 | orchestrator | Tuesday 23 September 2025 07:42:14 +0000 (0:00:02.788) 0:05:20.788 ***** 2025-09-23 07:43:17.421440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-23 07:43:17.421455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-23 07:43:17.421462 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421468 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-23 07:43:17.421498 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421504 | orchestrator | 2025-09-23 07:43:17.421510 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-23 07:43:17.421516 | orchestrator | Tuesday 23 September 2025 07:42:14 +0000 (0:00:00.410) 0:05:21.199 ***** 2025-09-23 07:43:17.421523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-23 07:43:17.421529 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-23 07:43:17.421541 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  
2025-09-23 07:43:17.421554 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421560 | orchestrator | 2025-09-23 07:43:17.421566 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-23 07:43:17.421572 | orchestrator | Tuesday 23 September 2025 07:42:15 +0000 (0:00:01.099) 0:05:22.298 ***** 2025-09-23 07:43:17.421578 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421584 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421596 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421602 | orchestrator | 2025-09-23 07:43:17.421608 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-23 07:43:17.421614 | orchestrator | Tuesday 23 September 2025 07:42:16 +0000 (0:00:00.481) 0:05:22.780 ***** 2025-09-23 07:43:17.421620 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421627 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421633 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421639 | orchestrator | 2025-09-23 07:43:17.421645 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-23 07:43:17.421651 | orchestrator | Tuesday 23 September 2025 07:42:17 +0000 (0:00:01.437) 0:05:24.217 ***** 2025-09-23 07:43:17.421657 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:43:17.421663 | orchestrator | 2025-09-23 07:43:17.421669 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-23 07:43:17.421675 | orchestrator | Tuesday 23 September 2025 07:42:19 +0000 (0:00:01.829) 0:05:26.047 ***** 2025-09-23 07:43:17.421684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.421695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.421702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.421709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.421720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.421730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-23 07:43:17.421736 | orchestrator | 2025-09-23 07:43:17.421746 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-23 07:43:17.421752 | orchestrator | Tuesday 23 September 2025 07:42:25 +0000 (0:00:06.483) 0:05:32.531 ***** 2025-09-23 07:43:17.421759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.421765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.421775 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.421791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.421798 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.421814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-23 07:43:17.421827 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421833 | orchestrator | 2025-09-23 07:43:17.421839 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] 
*********************** 2025-09-23 07:43:17.421845 | orchestrator | Tuesday 23 September 2025 07:42:26 +0000 (0:00:00.663) 0:05:33.194 ***** 2025-09-23 07:43:17.421852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421877 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.421883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}) 
 2025-09-23 07:43:17.421905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421912 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.421918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-23 07:43:17.421950 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.421956 | orchestrator | 2025-09-23 07:43:17.421963 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-23 07:43:17.421969 | orchestrator | Tuesday 23 September 2025 07:42:28 +0000 (0:00:01.619) 0:05:34.813 ***** 2025-09-23 07:43:17.421975 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.421981 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.421987 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.421993 | orchestrator 
| 2025-09-23 07:43:17.421999 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-23 07:43:17.422006 | orchestrator | Tuesday 23 September 2025 07:42:29 +0000 (0:00:01.382) 0:05:36.196 ***** 2025-09-23 07:43:17.422012 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.422042 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.422049 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.422055 | orchestrator | 2025-09-23 07:43:17.422061 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-23 07:43:17.422067 | orchestrator | Tuesday 23 September 2025 07:42:31 +0000 (0:00:02.152) 0:05:38.349 ***** 2025-09-23 07:43:17.422073 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422079 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422085 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422091 | orchestrator | 2025-09-23 07:43:17.422097 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-23 07:43:17.422104 | orchestrator | Tuesday 23 September 2025 07:42:32 +0000 (0:00:00.328) 0:05:38.677 ***** 2025-09-23 07:43:17.422110 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422116 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422122 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422128 | orchestrator | 2025-09-23 07:43:17.422134 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-23 07:43:17.422140 | orchestrator | Tuesday 23 September 2025 07:42:32 +0000 (0:00:00.330) 0:05:39.007 ***** 2025-09-23 07:43:17.422146 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422152 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422158 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422165 | orchestrator | 
2025-09-23 07:43:17.422171 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-23 07:43:17.422177 | orchestrator | Tuesday 23 September 2025 07:42:33 +0000 (0:00:00.636) 0:05:39.644 ***** 2025-09-23 07:43:17.422183 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422189 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422195 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422201 | orchestrator | 2025-09-23 07:43:17.422207 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-23 07:43:17.422214 | orchestrator | Tuesday 23 September 2025 07:42:33 +0000 (0:00:00.341) 0:05:39.985 ***** 2025-09-23 07:43:17.422220 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422226 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422232 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422238 | orchestrator | 2025-09-23 07:43:17.422244 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-23 07:43:17.422250 | orchestrator | Tuesday 23 September 2025 07:42:33 +0000 (0:00:00.316) 0:05:40.302 ***** 2025-09-23 07:43:17.422256 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422262 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422268 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422274 | orchestrator | 2025-09-23 07:43:17.422280 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-23 07:43:17.422287 | orchestrator | Tuesday 23 September 2025 07:42:34 +0000 (0:00:00.842) 0:05:41.145 ***** 2025-09-23 07:43:17.422293 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.422304 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.422310 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.422316 | orchestrator | 2025-09-23 
07:43:17.422322 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-23 07:43:17.422331 | orchestrator | Tuesday 23 September 2025 07:42:35 +0000 (0:00:00.706) 0:05:41.852 ***** 2025-09-23 07:43:17.422337 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.422344 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.422350 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.422356 | orchestrator | 2025-09-23 07:43:17.422362 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-23 07:43:17.422368 | orchestrator | Tuesday 23 September 2025 07:42:35 +0000 (0:00:00.368) 0:05:42.220 ***** 2025-09-23 07:43:17.422375 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.422381 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.422387 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.422393 | orchestrator | 2025-09-23 07:43:17.422399 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-23 07:43:17.422405 | orchestrator | Tuesday 23 September 2025 07:42:36 +0000 (0:00:00.929) 0:05:43.149 ***** 2025-09-23 07:43:17.422411 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.422417 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.422424 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.422430 | orchestrator | 2025-09-23 07:43:17.422436 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-23 07:43:17.422442 | orchestrator | Tuesday 23 September 2025 07:42:37 +0000 (0:00:01.253) 0:05:44.403 ***** 2025-09-23 07:43:17.422448 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.422454 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.422464 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.422471 | orchestrator | 2025-09-23 07:43:17.422477 | orchestrator | RUNNING HANDLER [loadbalancer : Start 
backup haproxy container] **************** 2025-09-23 07:43:17.422519 | orchestrator | Tuesday 23 September 2025 07:42:38 +0000 (0:00:00.959) 0:05:45.362 ***** 2025-09-23 07:43:17.422526 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.422532 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.422539 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.422545 | orchestrator | 2025-09-23 07:43:17.422551 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-23 07:43:17.422557 | orchestrator | Tuesday 23 September 2025 07:42:49 +0000 (0:00:10.377) 0:05:55.740 ***** 2025-09-23 07:43:17.422563 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.422570 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.422576 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.422582 | orchestrator | 2025-09-23 07:43:17.422588 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-23 07:43:17.422593 | orchestrator | Tuesday 23 September 2025 07:42:49 +0000 (0:00:00.782) 0:05:56.522 ***** 2025-09-23 07:43:17.422598 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.422604 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.422609 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.422615 | orchestrator | 2025-09-23 07:43:17.422620 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-23 07:43:17.422625 | orchestrator | Tuesday 23 September 2025 07:42:58 +0000 (0:00:08.466) 0:06:04.989 ***** 2025-09-23 07:43:17.422631 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:43:17.422636 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:43:17.422641 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:43:17.422647 | orchestrator | 2025-09-23 07:43:17.422652 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 
2025-09-23 07:43:17.422657 | orchestrator | Tuesday 23 September 2025 07:43:02 +0000 (0:00:04.130) 0:06:09.120 ***** 2025-09-23 07:43:17.422663 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:43:17.422668 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:43:17.422674 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:43:17.422679 | orchestrator | 2025-09-23 07:43:17.422689 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-23 07:43:17.422694 | orchestrator | Tuesday 23 September 2025 07:43:06 +0000 (0:00:04.250) 0:06:13.371 ***** 2025-09-23 07:43:17.422700 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422705 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422710 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422716 | orchestrator | 2025-09-23 07:43:17.422721 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-23 07:43:17.422727 | orchestrator | Tuesday 23 September 2025 07:43:07 +0000 (0:00:00.341) 0:06:13.712 ***** 2025-09-23 07:43:17.422732 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422737 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422743 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422748 | orchestrator | 2025-09-23 07:43:17.422753 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-23 07:43:17.422759 | orchestrator | Tuesday 23 September 2025 07:43:07 +0000 (0:00:00.353) 0:06:14.066 ***** 2025-09-23 07:43:17.422764 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:43:17.422769 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:43:17.422775 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:43:17.422780 | orchestrator | 2025-09-23 07:43:17.422785 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 
2025-09-23 07:43:17.422791 | orchestrator | Tuesday 23 September 2025 07:43:08 +0000 (0:00:00.688) 0:06:14.755 *****
2025-09-23 07:43:17.422796 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.422802 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.422807 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.422812 | orchestrator |
2025-09-23 07:43:17.422818 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-23 07:43:17.422823 | orchestrator | Tuesday 23 September 2025 07:43:08 +0000 (0:00:00.337) 0:06:15.092 *****
2025-09-23 07:43:17.422829 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.422834 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.422839 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.422845 | orchestrator |
2025-09-23 07:43:17.422850 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-23 07:43:17.422855 | orchestrator | Tuesday 23 September 2025 07:43:08 +0000 (0:00:00.344) 0:06:15.436 *****
2025-09-23 07:43:17.422861 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:43:17.422866 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:43:17.422871 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:43:17.422877 | orchestrator |
2025-09-23 07:43:17.422882 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-23 07:43:17.422887 | orchestrator | Tuesday 23 September 2025 07:43:09 +0000 (0:00:00.333) 0:06:15.770 *****
2025-09-23 07:43:17.422896 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:43:17.422902 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:43:17.422907 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:43:17.422913 | orchestrator |
2025-09-23 07:43:17.422918 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-23 07:43:17.422924 | orchestrator | Tuesday 23 September 2025 07:43:14 +0000 (0:00:05.125) 0:06:20.896 *****
2025-09-23 07:43:17.422929 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:43:17.422934 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:43:17.422940 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:43:17.422945 | orchestrator |
2025-09-23 07:43:17.422950 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:43:17.422956 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-23 07:43:17.422961 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-23 07:43:17.422967 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-23 07:43:17.422976 | orchestrator |
2025-09-23 07:43:17.422982 | orchestrator |
2025-09-23 07:43:17.422991 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:43:17.422996 | orchestrator | Tuesday 23 September 2025 07:43:15 +0000 (0:00:00.901) 0:06:21.797 *****
2025-09-23 07:43:17.423002 | orchestrator | ===============================================================================
2025-09-23 07:43:17.423007 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.38s
2025-09-23 07:43:17.423012 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.47s
2025-09-23 07:43:17.423018 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.80s
2025-09-23 07:43:17.423023 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.48s
2025-09-23 07:43:17.423028 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.55s
2025-09-23 07:43:17.423033 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.13s
2025-09-23 07:43:17.423039 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.49s
2025-09-23 07:43:17.423044 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.43s
2025-09-23 07:43:17.423050 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.36s
2025-09-23 07:43:17.423055 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.35s
2025-09-23 07:43:17.423060 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.34s
2025-09-23 07:43:17.423065 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.29s
2025-09-23 07:43:17.423071 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.27s
2025-09-23 07:43:17.423076 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.25s
2025-09-23 07:43:17.423081 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.13s
2025-09-23 07:43:17.423086 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.06s
2025-09-23 07:43:17.423092 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.99s
2025-09-23 07:43:17.423097 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.95s
2025-09-23 07:43:17.423102 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.93s
2025-09-23 07:43:17.423108 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.91s
2025-09-23 07:43:17.423113 | orchestrator | 2025-09-23 07:43:17 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state STARTED
2025-09-23 07:43:17.423118 | orchestrator |
2025-09-23 07:43:17 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED
2025-09-23 07:43:17.423124 | orchestrator | 2025-09-23 07:43:17 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:43:20.452425 | orchestrator | 2025-09-23 07:43:20 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:43:20.453836 | orchestrator | 2025-09-23 07:43:20 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state STARTED
2025-09-23 07:43:20.455296 | orchestrator | 2025-09-23 07:43:20 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED
2025-09-23 07:43:20.455527 | orchestrator | 2025-09-23 07:43:20 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:45:25.483344 | orchestrator | 2025-09-23 07:45:25 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state STARTED
2025-09-23 07:45:25.485522 | orchestrator | 2025-09-23 07:45:25 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state STARTED
2025-09-23 07:45:25.487669 | orchestrator | 2025-09-23 07:45:25 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state
STARTED
2025-09-23 07:45:25.488175 | orchestrator | 2025-09-23 07:45:25 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:45:28.549645 | orchestrator | 2025-09-23 07:45:28 | INFO  | Task f72aee1d-21af-4c84-b9a5-d72d0ed14175 is in state SUCCESS
2025-09-23 07:45:28.551465 | orchestrator |
2025-09-23 07:45:28.551508 | orchestrator |
2025-09-23 07:45:28.551550 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-23 07:45:28.551564 | orchestrator |
2025-09-23 07:45:28.551576 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-23 07:45:28.551587 | orchestrator | Tuesday 23 September 2025 07:34:32 +0000 (0:00:00.700) 0:00:00.700 *****
2025-09-23 07:45:28.551600 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:45:28.551612 | orchestrator |
2025-09-23 07:45:28.551623 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-23 07:45:28.551634 | orchestrator | Tuesday 23 September 2025 07:34:33 +0000 (0:00:01.017) 0:00:01.718 *****
2025-09-23 07:45:28.551661 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.551674 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.551685 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.551696 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.551707 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.551717 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.551867 | orchestrator |
2025-09-23 07:45:28.551893 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-23 07:45:28.551929 | orchestrator | Tuesday 23 September 2025 07:34:35 +0000 (0:00:01.619) 0:00:03.338 *****
2025-09-23 07:45:28.551942 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.551954 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.551966 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.551978 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.551990 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.552002 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.552014 | orchestrator |
2025-09-23 07:45:28.552026 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-23 07:45:28.552039 | orchestrator | Tuesday 23 September 2025 07:34:36 +0000 (0:00:01.104) 0:00:04.442 *****
2025-09-23 07:45:28.552051 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.552063 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.552075 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.552087 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.552099 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.552111 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.552123 | orchestrator |
2025-09-23 07:45:28.552136 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-23 07:45:28.552148 | orchestrator | Tuesday 23 September 2025 07:34:37 +0000 (0:00:01.064) 0:00:05.507 *****
2025-09-23 07:45:28.552160 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.552173 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.552185 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.552198 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.552210 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.552222 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.552234 | orchestrator |
2025-09-23 07:45:28.552244 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-23 07:45:28.552255 | orchestrator | Tuesday 23 September 2025 07:34:38 +0000 (0:00:00.848) 0:00:06.355 *****
2025-09-23 07:45:28.552317 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.552328 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.552339 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.552349 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.552360 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.552396 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.552406 | orchestrator |
2025-09-23 07:45:28.552431 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-23 07:45:28.552443 | orchestrator | Tuesday 23 September 2025 07:34:39 +0000 (0:00:00.679) 0:00:07.035 *****
2025-09-23 07:45:28.552453 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.552464 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.552474 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.552485 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.552543 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.552556 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.552567 | orchestrator |
2025-09-23 07:45:28.552578 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-23 07:45:28.552589 | orchestrator | Tuesday 23 September 2025 07:34:41 +0000 (0:00:02.043) 0:00:09.078 *****
2025-09-23 07:45:28.552600 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.552611 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.552622 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.552633 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.552643 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.552654 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.552664 | orchestrator |
2025-09-23 07:45:28.552675 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-23 07:45:28.552685 | orchestrator | Tuesday 23 September 2025 07:34:42 +0000 (0:00:00.989) 0:00:10.068 *****
2025-09-23 07:45:28.552696 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.552706 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.552717 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.552727 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.552748 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.552758 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.552769 | orchestrator |
2025-09-23 07:45:28.552779 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-23 07:45:28.552790 | orchestrator | Tuesday 23 September 2025 07:34:43 +0000 (0:00:00.940) 0:00:11.009 *****
2025-09-23 07:45:28.552800 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-23 07:45:28.552811 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-23 07:45:28.552821 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-23 07:45:28.552832 | orchestrator |
2025-09-23 07:45:28.552842 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-23 07:45:28.552853 | orchestrator | Tuesday 23 September 2025 07:34:43 +0000 (0:00:00.788) 0:00:11.797 *****
2025-09-23 07:45:28.552863 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.552874 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.552885 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.552895 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.552905 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.552915 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.552926 | orchestrator |
2025-09-23 07:45:28.552950 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-23 07:45:28.552962 | orchestrator | Tuesday 23 September 2025 07:34:45 +0000 (0:00:01.096) 0:00:12.894 *****
2025-09-23 07:45:28.552972 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-23 07:45:28.552983 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-23 07:45:28.552994 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-23 07:45:28.553004 | orchestrator |
2025-09-23 07:45:28.553015 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-23 07:45:28.553025 | orchestrator | Tuesday 23 September 2025 07:34:48 +0000 (0:00:03.853) 0:00:16.748 *****
2025-09-23 07:45:28.553036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-23 07:45:28.553046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-23 07:45:28.553057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-23 07:45:28.553067 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.553078 | orchestrator |
2025-09-23 07:45:28.553088 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-23 07:45:28.553099 | orchestrator | Tuesday 23 September 2025 07:34:49 +0000 (0:00:00.875) 0:00:17.624 *****
2025-09-23 07:45:28.553112 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.553126 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.553136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.553147 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.553158 | orchestrator |
2025-09-23 07:45:28.553169 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-23 07:45:28.553179 | orchestrator | Tuesday 23 September 2025 07:34:50 +0000 (0:00:00.732) 0:00:18.356 *****
2025-09-23 07:45:28.553192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.553339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.553353 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.553379 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.553391 | orchestrator |
2025-09-23 07:45:28.553402 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-23 07:45:28.553412 | orchestrator | Tuesday 23 September 2025 07:34:50 +0000 (0:00:00.212) 0:00:18.568 *****
2025-09-23 07:45:28.553433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-23 07:34:45.778674', 'end': '2025-09-23 07:34:46.164479', 'delta': '0:00:00.385805', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.553448 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-23 07:34:47.030507', 'end': '2025-09-23 07:34:47.487178', 'delta': '0:00:00.456671', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.553548 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-2'], 'start': '2025-09-23 07:34:48.074325', 'end': '2025-09-23 07:34:48.430161', 'delta': '0:00:00.355836', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.553573 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.553584 | orchestrator | 2025-09-23 07:45:28.553595 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-23 07:45:28.553606 | orchestrator | Tuesday 23 September 2025 07:34:51 +0000 (0:00:00.380) 0:00:18.949 ***** 2025-09-23 07:45:28.553625 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.553636 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.553647 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.553658 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.553668 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.553679 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.553689 | orchestrator | 2025-09-23 07:45:28.553700 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-23 07:45:28.553711 | orchestrator | Tuesday 23 September 2025 07:34:53 +0000 (0:00:02.712) 0:00:21.661 ***** 2025-09-23 07:45:28.553722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-23 07:45:28.553733 | orchestrator | 2025-09-23 07:45:28.553743 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-23 07:45:28.553759 | orchestrator | Tuesday 23 September 2025 07:34:54 +0000 (0:00:00.703) 
0:00:22.365 ***** 2025-09-23 07:45:28.553770 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.553781 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.553791 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.553802 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.553813 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.553823 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.553869 | orchestrator | 2025-09-23 07:45:28.553880 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-23 07:45:28.553891 | orchestrator | Tuesday 23 September 2025 07:34:55 +0000 (0:00:01.286) 0:00:23.651 ***** 2025-09-23 07:45:28.553901 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.553912 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.553922 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.553933 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.553943 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.553954 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.553964 | orchestrator | 2025-09-23 07:45:28.553975 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-23 07:45:28.553985 | orchestrator | Tuesday 23 September 2025 07:34:56 +0000 (0:00:01.051) 0:00:24.702 ***** 2025-09-23 07:45:28.553996 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554006 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.554086 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.554102 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.554112 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.554123 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.554133 | orchestrator | 2025-09-23 07:45:28.554144 | orchestrator | TASK [ceph-facts : Set_fact fsid from 
current_fsid] **************************** 2025-09-23 07:45:28.554155 | orchestrator | Tuesday 23 September 2025 07:34:57 +0000 (0:00:00.860) 0:00:25.563 ***** 2025-09-23 07:45:28.554165 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554176 | orchestrator | 2025-09-23 07:45:28.554297 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-23 07:45:28.554310 | orchestrator | Tuesday 23 September 2025 07:34:57 +0000 (0:00:00.151) 0:00:25.714 ***** 2025-09-23 07:45:28.554321 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554332 | orchestrator | 2025-09-23 07:45:28.554342 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-23 07:45:28.554353 | orchestrator | Tuesday 23 September 2025 07:34:58 +0000 (0:00:00.399) 0:00:26.113 ***** 2025-09-23 07:45:28.554393 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554405 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.554416 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.554426 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.554437 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.554447 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.554458 | orchestrator | 2025-09-23 07:45:28.554480 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-23 07:45:28.554500 | orchestrator | Tuesday 23 September 2025 07:34:58 +0000 (0:00:00.708) 0:00:26.822 ***** 2025-09-23 07:45:28.554511 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554522 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.554532 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.554543 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.554553 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.554564 | orchestrator | skipping: 
[testbed-node-2] 2025-09-23 07:45:28.554575 | orchestrator | 2025-09-23 07:45:28.554585 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-23 07:45:28.554596 | orchestrator | Tuesday 23 September 2025 07:35:00 +0000 (0:00:01.044) 0:00:27.866 ***** 2025-09-23 07:45:28.554606 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554617 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.554627 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.554638 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.554649 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.554659 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.554669 | orchestrator | 2025-09-23 07:45:28.554680 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-23 07:45:28.554690 | orchestrator | Tuesday 23 September 2025 07:35:00 +0000 (0:00:00.627) 0:00:28.494 ***** 2025-09-23 07:45:28.554701 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554712 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.554722 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.554733 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.554743 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.554754 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.554764 | orchestrator | 2025-09-23 07:45:28.554775 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-23 07:45:28.554785 | orchestrator | Tuesday 23 September 2025 07:35:01 +0000 (0:00:00.755) 0:00:29.250 ***** 2025-09-23 07:45:28.554796 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554806 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.554817 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.554828 | orchestrator | skipping: 
[testbed-node-0] 2025-09-23 07:45:28.554838 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.554848 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.554893 | orchestrator | 2025-09-23 07:45:28.554904 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-23 07:45:28.554915 | orchestrator | Tuesday 23 September 2025 07:35:02 +0000 (0:00:00.735) 0:00:29.986 ***** 2025-09-23 07:45:28.554925 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.554936 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.554947 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.554957 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.554968 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.554978 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.554989 | orchestrator | 2025-09-23 07:45:28.555000 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-23 07:45:28.555011 | orchestrator | Tuesday 23 September 2025 07:35:03 +0000 (0:00:00.886) 0:00:30.873 ***** 2025-09-23 07:45:28.555022 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.555033 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.555049 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.555060 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.555070 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.555081 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.555091 | orchestrator | 2025-09-23 07:45:28.555102 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-23 07:45:28.555113 | orchestrator | Tuesday 23 September 2025 07:35:03 +0000 (0:00:00.794) 0:00:31.667 ***** 2025-09-23 07:45:28.555125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--fa3e03eb--2d2a--5719--835a--39fedcc9009f-osd--block--fa3e03eb--2d2a--5719--835a--39fedcc9009f', 'dm-uuid-LVM-FHNXkK9ifNZQ8LWRnVtzawWUcnaTHNMoPTyR0SdHm9HYDyijezVy6TPXhKueqSbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0570cb7e--4d0f--57ea--8b12--da850e205fc7-osd--block--0570cb7e--4d0f--57ea--8b12--da850e205fc7', 'dm-uuid-LVM-2TNdLbVMERZXZ4qd8SvwGerVO8RLuWtHtDHFMeuc0zIMJys19eeLIYKnGH02vLY1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555394 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555461 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fa3e03eb--2d2a--5719--835a--39fedcc9009f-osd--block--fa3e03eb--2d2a--5719--835a--39fedcc9009f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fM7ljo-Z5R8-Q0ef-6yah-KEG0-a75U-AGeTru', 'scsi-0QEMU_QEMU_HARDDISK_c90ab8a7-6741-4b53-9264-08db4b9d41dd', 'scsi-SQEMU_QEMU_HARDDISK_c90ab8a7-6741-4b53-9264-08db4b9d41dd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0570cb7e--4d0f--57ea--8b12--da850e205fc7-osd--block--0570cb7e--4d0f--57ea--8b12--da850e205fc7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6rneW-lget-KtMe-Abei-G9R2-y4e5-RfJi6o', 'scsi-0QEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676', 'scsi-SQEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013', 'scsi-SQEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ede7e8c--1177--5738--bf30--f710eefa62dc-osd--block--7ede7e8c--1177--5738--bf30--f710eefa62dc', 'dm-uuid-LVM-NgZG5ji7HfB8IV2bPu8OBaDhXaBEH7UTodp14LDJ4eKUh9n0XoDCobhqh5FDEw3z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b345e42--d385--5c5d--ac31--471707d336a3-osd--block--6b345e42--d385--5c5d--ac31--471707d336a3', 
'dm-uuid-LVM-mU61CxIWGB9jUTaZh0QsW622LIJSrNWwZCkPiiwyQdUqROx0Bq2Hs4DONnWa7GAS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555631 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.555643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ede7e8c--1177--5738--bf30--f710eefa62dc-osd--block--7ede7e8c--1177--5738--bf30--f710eefa62dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xWpyYw-F1EM-syGU-5CgF-O2Pl-ep3M-c1Skla', 'scsi-0QEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec', 'scsi-SQEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6b345e42--d385--5c5d--ac31--471707d336a3-osd--block--6b345e42--d385--5c5d--ac31--471707d336a3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-crt05h-mBkl-dd9g-xK1l-c3FO-Ip8Q-BJ18xz', 'scsi-0QEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd', 'scsi-SQEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91', 'scsi-SQEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.555753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--4a27826e--7697--5dae--8bcf--65313ee63b58-osd--block--4a27826e--7697--5dae--8bcf--65313ee63b58', 'dm-uuid-LVM-6ficvLhRpdNC4bqCip3odIJa81AcAI17S3rd6t4DcCeiq1oknBitZJNhGfd7TN5u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b31a677e--efd4--57fc--b4ad--0e2207d5fa48-osd--block--b31a677e--efd4--57fc--b4ad--0e2207d5fa48', 'dm-uuid-LVM-e1XWlmUNqKg5peDV3v4Azb7L4vfb5JWGcwIpmZeqpT0ODLsARXlZJISNgmu0cQSb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.555992 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.556003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556049 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4a27826e--7697--5dae--8bcf--65313ee63b58-osd--block--4a27826e--7697--5dae--8bcf--65313ee63b58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3i9NQd-zCuN-te3J-sjJW-E1KT-pOAG-TIscye', 'scsi-0QEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173', 'scsi-SQEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b31a677e--efd4--57fc--b4ad--0e2207d5fa48-osd--block--b31a677e--efd4--57fc--b4ad--0e2207d5fa48'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VzVui3-jDRW-PDPs-G4T4-m0ml-2P0A-V3kUfU', 'scsi-0QEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6', 'scsi-SQEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb', 'scsi-SQEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-09-23 07:45:28.556161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556236 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556248 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.556259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-09-23 07:45:28.556386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part1', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part14', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part15', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part16', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556490 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.556506 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.556517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556598 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:45:28.556626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556646 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-23 07:45:28.556658 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.556669 | orchestrator | 2025-09-23 07:45:28.556686 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-23 07:45:28.556697 | orchestrator | Tuesday 23 September 2025 07:35:05 +0000 (0:00:01.995) 0:00:33.663 ***** 2025-09-23 07:45:28.556709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa3e03eb--2d2a--5719--835a--39fedcc9009f-osd--block--fa3e03eb--2d2a--5719--835a--39fedcc9009f', 'dm-uuid-LVM-FHNXkK9ifNZQ8LWRnVtzawWUcnaTHNMoPTyR0SdHm9HYDyijezVy6TPXhKueqSbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0570cb7e--4d0f--57ea--8b12--da850e205fc7-osd--block--0570cb7e--4d0f--57ea--8b12--da850e205fc7', 'dm-uuid-LVM-2TNdLbVMERZXZ4qd8SvwGerVO8RLuWtHtDHFMeuc0zIMJys19eeLIYKnGH02vLY1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556784 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556869 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ede7e8c--1177--5738--bf30--f710eefa62dc-osd--block--7ede7e8c--1177--5738--bf30--f710eefa62dc', 'dm-uuid-LVM-NgZG5ji7HfB8IV2bPu8OBaDhXaBEH7UTodp14LDJ4eKUh9n0XoDCobhqh5FDEw3z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-09-23 07:45:28.556903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556920 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b345e42--d385--5c5d--ac31--471707d336a3-osd--block--6b345e42--d385--5c5d--ac31--471707d336a3', 'dm-uuid-LVM-mU61CxIWGB9jUTaZh0QsW622LIJSrNWwZCkPiiwyQdUqROx0Bq2Hs4DONnWa7GAS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556931 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.556960 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557022 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557063 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 
07:45:28.557083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cc438e1-0ca2-4ae5-90bc-25cf54c9d604-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fa3e03eb--2d2a--5719--835a--39fedcc9009f-osd--block--fa3e03eb--2d2a--5719--835a--39fedcc9009f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fM7ljo-Z5R8-Q0ef-6yah-KEG0-a75U-AGeTru', 'scsi-0QEMU_QEMU_HARDDISK_c90ab8a7-6741-4b53-9264-08db4b9d41dd', 'scsi-SQEMU_QEMU_HARDDISK_c90ab8a7-6741-4b53-9264-08db4b9d41dd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557121 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557133 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0570cb7e--4d0f--57ea--8b12--da850e205fc7-osd--block--0570cb7e--4d0f--57ea--8b12--da850e205fc7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6rneW-lget-KtMe-Abei-G9R2-y4e5-RfJi6o', 'scsi-0QEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676', 'scsi-SQEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557145 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013', 'scsi-SQEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557215 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557234 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557254 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7ede7e8c--1177--5738--bf30--f710eefa62dc-osd--block--7ede7e8c--1177--5738--bf30--f710eefa62dc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xWpyYw-F1EM-syGU-5CgF-O2Pl-ep3M-c1Skla', 'scsi-0QEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec', 'scsi-SQEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6b345e42--d385--5c5d--ac31--471707d336a3-osd--block--6b345e42--d385--5c5d--ac31--471707d336a3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-crt05h-mBkl-dd9g-xK1l-c3FO-Ip8Q-BJ18xz', 'scsi-0QEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd', 'scsi-SQEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.557299 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91', 'scsi-SQEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558056 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558103 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.558115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4a27826e--7697--5dae--8bcf--65313ee63b58-osd--block--4a27826e--7697--5dae--8bcf--65313ee63b58', 'dm-uuid-LVM-6ficvLhRpdNC4bqCip3odIJa81AcAI17S3rd6t4DcCeiq1oknBitZJNhGfd7TN5u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558126 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b31a677e--efd4--57fc--b4ad--0e2207d5fa48-osd--block--b31a677e--efd4--57fc--b4ad--0e2207d5fa48', 
'dm-uuid-LVM-e1XWlmUNqKg5peDV3v4Azb7L4vfb5JWGcwIpmZeqpT0ODLsARXlZJISNgmu0cQSb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558164 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558203 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558222 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:45:28.558236 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558247 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558267 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558277 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.558296 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8ba3c17a-eb80-4948-8dbf-766c30daa51c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558312 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558323 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558344 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558355 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558421 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558448 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558467 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558577 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558596 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4a27826e--7697--5dae--8bcf--65313ee63b58-osd--block--4a27826e--7697--5dae--8bcf--65313ee63b58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3i9NQd-zCuN-te3J-sjJW-E1KT-pOAG-TIscye', 'scsi-0QEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173', 'scsi-SQEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558607 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b31a677e--efd4--57fc--b4ad--0e2207d5fa48-osd--block--b31a677e--efd4--57fc--b4ad--0e2207d5fa48'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VzVui3-jDRW-PDPs-G4T4-m0ml-2P0A-V3kUfU', 'scsi-0QEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6', 'scsi-SQEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558642 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558654 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558665 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb', 'scsi-SQEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558681 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558698 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558728 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558740 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558758 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part1', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part14', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part15', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part16', 'scsi-SQEMU_QEMU_HARDDISK_c666169e-f4a3-4a18-863e-3a2fdc794692-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558777 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558788 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.558800 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.558811 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.558829 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558841 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558866 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558893 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558908 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558917 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558931 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558941 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558956 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_aab1398b-33d3-432d-85d3-6da114cbf6bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558971 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-23 07:45:28.558980 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.558990 | orchestrator |
2025-09-23 07:45:28.558999 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-23 07:45:28.559008 | orchestrator | Tuesday 23 September 2025 07:35:07 +0000 (0:00:01.456) 0:00:35.119 *****
2025-09-23 07:45:28.559020 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.559029 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.559036 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.559044 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.559052 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.559060 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.559067 | orchestrator |
2025-09-23 07:45:28.559075 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-23 07:45:28.559083 | orchestrator | Tuesday 23 September 2025 07:35:08 +0000 (0:00:01.247) 0:00:36.367 *****
2025-09-23 07:45:28.559091 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.559098 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.559106 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.559113 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.559121 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.559129 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.559136 | orchestrator |
2025-09-23 07:45:28.559144 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-23 07:45:28.559152 | orchestrator | Tuesday 23 September 2025 07:35:09 +0000 (0:00:01.121) 0:00:37.488 *****
2025-09-23 07:45:28.559159 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.559167 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.559175 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.559182 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.559190 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.559198 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.559206 | orchestrator |
2025-09-23 07:45:28.559213 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-23 07:45:28.559228 | orchestrator | Tuesday 23 September 2025 07:35:10 +0000 (0:00:00.855) 0:00:38.344 *****
2025-09-23 07:45:28.559236 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.559244 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.559252 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.559259 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.559267 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.559274 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.559282 | orchestrator |
2025-09-23 07:45:28.559290 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-23 07:45:28.559298 | orchestrator | Tuesday 23 September 2025 07:35:11 +0000 (0:00:00.682) 0:00:39.027 *****
2025-09-23 07:45:28.559305 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.559313 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.559321 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.559328 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.559336 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.559344 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.559351 | orchestrator |
2025-09-23 07:45:28.559359 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-23 07:45:28.559381 | orchestrator | Tuesday 23 September 2025 07:35:12 +0000 (0:00:01.280) 0:00:40.307 *****
2025-09-23 07:45:28.559389 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.559397 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.559404 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.559412 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.559419 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.559427 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.559435 | orchestrator |
2025-09-23 07:45:28.559442 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-23 07:45:28.559454 | orchestrator | Tuesday 23 September 2025 07:35:13 +0000 (0:00:00.750) 0:00:41.058 *****
2025-09-23 07:45:28.559464 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-23 07:45:28.559476 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-23 07:45:28.559490 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-23 07:45:28.559502 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-23 07:45:28.559517 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-23 07:45:28.559529 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-23 07:45:28.559541 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-23 07:45:28.559554 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-23 07:45:28.559568 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-23 07:45:28.559576 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-23 07:45:28.559584 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-23 07:45:28.559591 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-23 07:45:28.559599 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-23 07:45:28.559606 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-23 07:45:28.559614 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-23 07:45:28.559622 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-23 07:45:28.559629 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-23 07:45:28.559637 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-23 07:45:28.559645 | orchestrator | 2025-09-23 07:45:28.559653 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-23 07:45:28.559660 | orchestrator | Tuesday 23 September 2025 07:35:16 +0000 (0:00:03.152) 0:00:44.210 ***** 2025-09-23 07:45:28.559668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-23 07:45:28.559676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-23 07:45:28.559690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-23 07:45:28.559698 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.559706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-23 07:45:28.559713 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-23 07:45:28.559721 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-23 07:45:28.559729 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-0)  2025-09-23 07:45:28.559736 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-23 07:45:28.559744 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-23 07:45:28.559757 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.559765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-23 07:45:28.559773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-23 07:45:28.559781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-23 07:45:28.559789 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.559796 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-23 07:45:28.559804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-23 07:45:28.559812 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-23 07:45:28.559819 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.559827 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.559835 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-23 07:45:28.559843 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-23 07:45:28.559850 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-23 07:45:28.559858 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.559866 | orchestrator | 2025-09-23 07:45:28.559874 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-23 07:45:28.559882 | orchestrator | Tuesday 23 September 2025 07:35:17 +0000 (0:00:01.143) 0:00:45.353 ***** 2025-09-23 07:45:28.559889 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.559897 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.559905 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.559914 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.559922 | orchestrator | 2025-09-23 07:45:28.559931 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-23 07:45:28.559939 | orchestrator | Tuesday 23 September 2025 07:35:18 +0000 (0:00:00.967) 0:00:46.321 ***** 2025-09-23 07:45:28.559947 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.559955 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.559963 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.559970 | orchestrator | 2025-09-23 07:45:28.559978 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-23 07:45:28.559986 | orchestrator | Tuesday 23 September 2025 07:35:18 +0000 (0:00:00.319) 0:00:46.641 ***** 2025-09-23 07:45:28.559994 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.560001 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.560009 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.560017 | orchestrator | 2025-09-23 07:45:28.560025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-23 07:45:28.560033 | orchestrator | Tuesday 23 September 2025 07:35:19 +0000 (0:00:00.332) 0:00:46.973 ***** 2025-09-23 07:45:28.560040 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.560048 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.560056 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.560063 | orchestrator | 2025-09-23 07:45:28.560071 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-23 07:45:28.560079 | orchestrator | Tuesday 23 September 2025 07:35:19 +0000 (0:00:00.285) 0:00:47.258 ***** 2025-09-23 07:45:28.560098 | orchestrator | 
ok: [testbed-node-3] 2025-09-23 07:45:28.560107 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.560115 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.560122 | orchestrator | 2025-09-23 07:45:28.560130 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-23 07:45:28.560138 | orchestrator | Tuesday 23 September 2025 07:35:20 +0000 (0:00:00.593) 0:00:47.852 ***** 2025-09-23 07:45:28.560146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.560153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.560161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.560169 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.560176 | orchestrator | 2025-09-23 07:45:28.560184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-23 07:45:28.560192 | orchestrator | Tuesday 23 September 2025 07:35:20 +0000 (0:00:00.306) 0:00:48.158 ***** 2025-09-23 07:45:28.560200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.560207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.560215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.560223 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.560231 | orchestrator | 2025-09-23 07:45:28.560238 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-23 07:45:28.560246 | orchestrator | Tuesday 23 September 2025 07:35:20 +0000 (0:00:00.356) 0:00:48.515 ***** 2025-09-23 07:45:28.560254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.560261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.560269 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-09-23 07:45:28.560277 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.560285 | orchestrator | 2025-09-23 07:45:28.560292 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-23 07:45:28.560300 | orchestrator | Tuesday 23 September 2025 07:35:21 +0000 (0:00:00.377) 0:00:48.893 ***** 2025-09-23 07:45:28.560308 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.560316 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.560323 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.560331 | orchestrator | 2025-09-23 07:45:28.560339 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-23 07:45:28.560347 | orchestrator | Tuesday 23 September 2025 07:35:21 +0000 (0:00:00.411) 0:00:49.304 ***** 2025-09-23 07:45:28.560355 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-23 07:45:28.560378 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-23 07:45:28.560386 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-23 07:45:28.560394 | orchestrator | 2025-09-23 07:45:28.560406 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-23 07:45:28.560414 | orchestrator | Tuesday 23 September 2025 07:35:22 +0000 (0:00:00.955) 0:00:50.259 ***** 2025-09-23 07:45:28.560422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-23 07:45:28.560430 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-23 07:45:28.560438 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-23 07:45:28.560446 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-23 07:45:28.560453 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-23 07:45:28.560461 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-23 07:45:28.560469 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-23 07:45:28.560477 | orchestrator | 2025-09-23 07:45:28.560484 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-23 07:45:28.560498 | orchestrator | Tuesday 23 September 2025 07:35:23 +0000 (0:00:01.433) 0:00:51.693 ***** 2025-09-23 07:45:28.560506 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-23 07:45:28.560513 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-23 07:45:28.560521 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-23 07:45:28.560528 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-23 07:45:28.560536 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-23 07:45:28.560544 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-23 07:45:28.560551 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-23 07:45:28.560559 | orchestrator | 2025-09-23 07:45:28.560567 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-23 07:45:28.560575 | orchestrator | Tuesday 23 September 2025 07:35:26 +0000 (0:00:02.441) 0:00:54.135 ***** 2025-09-23 07:45:28.560583 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.560591 | orchestrator | 2025-09-23 07:45:28.560599 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-09-23 07:45:28.560606 | orchestrator | Tuesday 23 September 2025 07:35:27 +0000 (0:00:01.173) 0:00:55.309 ***** 2025-09-23 07:45:28.560614 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.560622 | orchestrator | 2025-09-23 07:45:28.560634 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-23 07:45:28.560642 | orchestrator | Tuesday 23 September 2025 07:35:28 +0000 (0:00:01.413) 0:00:56.722 ***** 2025-09-23 07:45:28.560649 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.560657 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.560665 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.560672 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.560680 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.560687 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.560717 | orchestrator | 2025-09-23 07:45:28.560726 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-23 07:45:28.560733 | orchestrator | Tuesday 23 September 2025 07:35:30 +0000 (0:00:01.435) 0:00:58.157 ***** 2025-09-23 07:45:28.560741 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.560749 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.560757 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.560764 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.560772 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.560780 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.560787 | orchestrator | 2025-09-23 07:45:28.560795 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-23 07:45:28.560803 | orchestrator | Tuesday 23 September 2025 07:35:31 +0000 
(0:00:01.199) 0:00:59.357 ***** 2025-09-23 07:45:28.560810 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.560818 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.560826 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.560833 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.560841 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.560849 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.560856 | orchestrator | 2025-09-23 07:45:28.560864 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-23 07:45:28.560872 | orchestrator | Tuesday 23 September 2025 07:35:32 +0000 (0:00:01.345) 0:01:00.702 ***** 2025-09-23 07:45:28.560879 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.560887 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.560900 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.560907 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.560915 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.560923 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.560930 | orchestrator | 2025-09-23 07:45:28.560938 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-23 07:45:28.560946 | orchestrator | Tuesday 23 September 2025 07:35:34 +0000 (0:00:01.239) 0:01:01.942 ***** 2025-09-23 07:45:28.560953 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.560961 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.560969 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.560977 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.560984 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.560992 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.561000 | orchestrator | 2025-09-23 07:45:28.561007 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2025-09-23 07:45:28.561019 | orchestrator | Tuesday 23 September 2025 07:35:35 +0000 (0:00:01.372) 0:01:03.314 ***** 2025-09-23 07:45:28.561027 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.561035 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.561043 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.561050 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.561058 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.561066 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.561073 | orchestrator | 2025-09-23 07:45:28.561081 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-23 07:45:28.561089 | orchestrator | Tuesday 23 September 2025 07:35:36 +0000 (0:00:00.648) 0:01:03.962 ***** 2025-09-23 07:45:28.561096 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.561104 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.561112 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.561120 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.561128 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.561135 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.561143 | orchestrator | 2025-09-23 07:45:28.561151 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-23 07:45:28.561159 | orchestrator | Tuesday 23 September 2025 07:35:36 +0000 (0:00:00.654) 0:01:04.617 ***** 2025-09-23 07:45:28.561166 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.561174 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.561182 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.561189 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.561197 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.561205 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.561212 | orchestrator | 2025-09-23 
07:45:28.561220 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-23 07:45:28.561228 | orchestrator | Tuesday 23 September 2025 07:35:37 +0000 (0:00:01.134) 0:01:05.752 ***** 2025-09-23 07:45:28.561236 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.561243 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.561251 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.561259 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.561266 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.561274 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.561281 | orchestrator | 2025-09-23 07:45:28.561289 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-23 07:45:28.561297 | orchestrator | Tuesday 23 September 2025 07:35:38 +0000 (0:00:01.043) 0:01:06.795 ***** 2025-09-23 07:45:28.561305 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.561313 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.561320 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.561328 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.561335 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.561343 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.561357 | orchestrator | 2025-09-23 07:45:28.561412 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-23 07:45:28.561432 | orchestrator | Tuesday 23 September 2025 07:35:39 +0000 (0:00:00.670) 0:01:07.466 ***** 2025-09-23 07:45:28.561447 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.561459 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.561472 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.561485 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.561497 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.561510 | 
orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.561522 | orchestrator | 2025-09-23 07:45:28.561542 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-23 07:45:28.561555 | orchestrator | Tuesday 23 September 2025 07:35:40 +0000 (0:00:00.556) 0:01:08.022 ***** 2025-09-23 07:45:28.561568 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.561582 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.561594 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.561607 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.561621 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.561633 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.561647 | orchestrator | 2025-09-23 07:45:28.561656 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-23 07:45:28.561664 | orchestrator | Tuesday 23 September 2025 07:35:41 +0000 (0:00:00.879) 0:01:08.901 ***** 2025-09-23 07:45:28.561671 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.561679 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.561687 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.561695 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.561702 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.561710 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.561718 | orchestrator | 2025-09-23 07:45:28.561726 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-23 07:45:28.561734 | orchestrator | Tuesday 23 September 2025 07:35:41 +0000 (0:00:00.707) 0:01:09.608 ***** 2025-09-23 07:45:28.561741 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.561749 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.561757 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.561764 | orchestrator | skipping: [testbed-node-0] 2025-09-23 
07:45:28.561772 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.561779 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.561787 | orchestrator | 2025-09-23 07:45:28.561794 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-23 07:45:28.561801 | orchestrator | Tuesday 23 September 2025 07:35:42 +0000 (0:00:00.965) 0:01:10.574 ***** 2025-09-23 07:45:28.561807 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.561814 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.561820 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.561826 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.561833 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.561839 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.561846 | orchestrator | 2025-09-23 07:45:28.561853 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-23 07:45:28.561859 | orchestrator | Tuesday 23 September 2025 07:35:43 +0000 (0:00:00.525) 0:01:11.100 ***** 2025-09-23 07:45:28.561866 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.561873 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.561879 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.561886 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.561892 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.561899 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.561905 | orchestrator | 2025-09-23 07:45:28.561918 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-23 07:45:28.561925 | orchestrator | Tuesday 23 September 2025 07:35:43 +0000 (0:00:00.681) 0:01:11.782 ***** 2025-09-23 07:45:28.561939 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.561946 | orchestrator | skipping: [testbed-node-4] 2025-09-23 
07:45:28.561952 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.561959 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.561965 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.561972 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.561978 | orchestrator | 2025-09-23 07:45:28.561985 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-23 07:45:28.561991 | orchestrator | Tuesday 23 September 2025 07:35:44 +0000 (0:00:00.550) 0:01:12.332 ***** 2025-09-23 07:45:28.561998 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.562005 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.562011 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.562055 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.562063 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.562069 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.562076 | orchestrator | 2025-09-23 07:45:28.562083 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-23 07:45:28.562089 | orchestrator | Tuesday 23 September 2025 07:35:45 +0000 (0:00:00.680) 0:01:13.013 ***** 2025-09-23 07:45:28.562096 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.562103 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.562109 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.562116 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.562122 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.562128 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.562135 | orchestrator | 2025-09-23 07:45:28.562141 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-23 07:45:28.562148 | orchestrator | Tuesday 23 September 2025 07:35:46 +0000 (0:00:01.045) 0:01:14.059 ***** 2025-09-23 07:45:28.562155 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.562161 | 
orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.562168 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.562174 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.562181 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.562187 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.562193 | orchestrator | 2025-09-23 07:45:28.562200 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-23 07:45:28.562207 | orchestrator | Tuesday 23 September 2025 07:35:47 +0000 (0:00:01.319) 0:01:15.379 ***** 2025-09-23 07:45:28.562213 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.562220 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.562226 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.562233 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.562239 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.562246 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.562252 | orchestrator | 2025-09-23 07:45:28.562259 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-23 07:45:28.562266 | orchestrator | Tuesday 23 September 2025 07:35:49 +0000 (0:00:01.994) 0:01:17.373 ***** 2025-09-23 07:45:28.562272 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.562279 | orchestrator | 2025-09-23 07:45:28.562290 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-23 07:45:28.562296 | orchestrator | Tuesday 23 September 2025 07:35:50 +0000 (0:00:00.992) 0:01:18.366 ***** 2025-09-23 07:45:28.562303 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.562309 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.562316 | orchestrator | 
skipping: [testbed-node-5] 2025-09-23 07:45:28.562322 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.562329 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.562335 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.562342 | orchestrator | 2025-09-23 07:45:28.562348 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-23 07:45:28.562359 | orchestrator | Tuesday 23 September 2025 07:35:51 +0000 (0:00:00.512) 0:01:18.878 ***** 2025-09-23 07:45:28.562377 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.562384 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.562390 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.562397 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.562403 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.562410 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.562417 | orchestrator | 2025-09-23 07:45:28.562423 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-23 07:45:28.562430 | orchestrator | Tuesday 23 September 2025 07:35:51 +0000 (0:00:00.616) 0:01:19.495 ***** 2025-09-23 07:45:28.562436 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-23 07:45:28.562443 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-23 07:45:28.562450 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-23 07:45:28.562457 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-23 07:45:28.562469 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-23 07:45:28.562485 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-23 07:45:28.562498 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-23 07:45:28.562509 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-23 07:45:28.562521 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-23 07:45:28.562532 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-23 07:45:28.562542 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-23 07:45:28.562566 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-23 07:45:28.562573 | orchestrator |
2025-09-23 07:45:28.562580 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-09-23 07:45:28.562586 | orchestrator | Tuesday 23 September 2025 07:35:52 +0000 (0:00:01.242) 0:01:20.737 *****
2025-09-23 07:45:28.562593 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:45:28.562599 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:45:28.562606 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:45:28.562612 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:45:28.562619 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:45:28.562625 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:45:28.562632 | orchestrator |
2025-09-23 07:45:28.562638 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-09-23 07:45:28.562644 | orchestrator | Tuesday 23 September 2025 07:35:53 +0000 (0:00:01.055) 0:01:21.793 *****
2025-09-23 07:45:28.562651 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.562657 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.562664 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.562670 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.562677 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.562683 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.562690 | orchestrator |
2025-09-23 07:45:28.562696 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-09-23 07:45:28.562703 | orchestrator | Tuesday 23 September 2025 07:35:54 +0000 (0:00:00.570) 0:01:22.363 *****
2025-09-23 07:45:28.562709 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.562715 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.562722 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.562728 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.562735 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.562749 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.562756 | orchestrator |
2025-09-23 07:45:28.562762 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-09-23 07:45:28.562769 | orchestrator | Tuesday 23 September 2025 07:35:55 +0000 (0:00:00.769) 0:01:23.132 *****
2025-09-23 07:45:28.562775 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.562782 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.562788 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.562795 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.562801 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.562807 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.562814 | orchestrator |
2025-09-23 07:45:28.562820 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-09-23 07:45:28.562827 | orchestrator | Tuesday 23 September 2025 07:35:55 +0000 (0:00:00.597) 0:01:23.730 *****
2025-09-23 07:45:28.562834 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:45:28.562840 | orchestrator |
2025-09-23 07:45:28.562847 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-09-23 07:45:28.562853 | orchestrator | Tuesday 23 September 2025 07:35:57 +0000 (0:00:01.103) 0:01:24.834 *****
2025-09-23 07:45:28.562860 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.562866 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.562877 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.562884 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.562890 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.562897 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.562903 | orchestrator |
2025-09-23 07:45:28.562910 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-09-23 07:45:28.562916 | orchestrator | Tuesday 23 September 2025 07:36:42 +0000 (0:00:45.228) 0:02:10.062 *****
2025-09-23 07:45:28.562923 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-23 07:45:28.562929 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-23 07:45:28.562936 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-23 07:45:28.562942 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.562949 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-23 07:45:28.562955 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-23 07:45:28.562962 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-23 07:45:28.562968 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.562975 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-23 07:45:28.562981 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-23 07:45:28.562988 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-23 07:45:28.562994 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563001 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-23 07:45:28.563007 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-23 07:45:28.563014 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-23 07:45:28.563020 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563027 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-23 07:45:28.563033 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-23 07:45:28.563040 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-23 07:45:28.563046 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563061 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-23 07:45:28.563072 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-23 07:45:28.563078 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-23 07:45:28.563085 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563091 | orchestrator |
2025-09-23 07:45:28.563098 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-09-23 07:45:28.563105 | orchestrator | Tuesday 23 September 2025 07:36:42 +0000 (0:00:00.553) 0:02:10.616 *****
2025-09-23 07:45:28.563111 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563118 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563124 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563131 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563137 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563144 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563150 | orchestrator |
2025-09-23 07:45:28.563157 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-09-23 07:45:28.563163 | orchestrator | Tuesday 23 September 2025 07:36:43 +0000 (0:00:00.664) 0:02:11.281 *****
2025-09-23 07:45:28.563170 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563176 | orchestrator |
2025-09-23 07:45:28.563183 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-09-23 07:45:28.563189 | orchestrator | Tuesday 23 September 2025 07:36:43 +0000 (0:00:00.138) 0:02:11.419 *****
2025-09-23 07:45:28.563196 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563202 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563209 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563215 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563222 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563228 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563235 | orchestrator |
2025-09-23 07:45:28.563241 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-23 07:45:28.563248 | orchestrator | Tuesday 23 September 2025 07:36:44 +0000 (0:00:00.543) 0:02:11.963 *****
2025-09-23 07:45:28.563254 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563261 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563267 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563274 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563280 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563286 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563293 | orchestrator |
2025-09-23 07:45:28.563299 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-23 07:45:28.563306 | orchestrator | Tuesday 23 September 2025 07:36:44 +0000 (0:00:00.660) 0:02:12.623 *****
2025-09-23 07:45:28.563313 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563319 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563326 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563332 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563338 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563345 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563351 | orchestrator |
2025-09-23 07:45:28.563358 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-23 07:45:28.563380 | orchestrator | Tuesday 23 September 2025 07:36:45 +0000 (0:00:00.630) 0:02:13.254 *****
2025-09-23 07:45:28.563387 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.563393 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.563400 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.563406 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.563413 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.563419 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.563426 | orchestrator |
2025-09-23 07:45:28.563433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-23 07:45:28.563444 | orchestrator | Tuesday 23 September 2025 07:36:47 +0000 (0:00:02.122) 0:02:15.377 *****
2025-09-23 07:45:28.563451 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.563458 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.563464 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.563470 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.563477 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.563483 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.563490 | orchestrator |
2025-09-23 07:45:28.563496 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-23 07:45:28.563503 | orchestrator | Tuesday 23 September 2025 07:36:48 +0000 (0:00:00.701) 0:02:16.079 *****
2025-09-23 07:45:28.563510 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:45:28.563517 | orchestrator |
2025-09-23 07:45:28.563524 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-23 07:45:28.563530 | orchestrator | Tuesday 23 September 2025 07:36:49 +0000 (0:00:01.261) 0:02:17.341 *****
2025-09-23 07:45:28.563537 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563543 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563550 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563556 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563563 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563569 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563576 | orchestrator |
2025-09-23 07:45:28.563582 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-09-23 07:45:28.563589 | orchestrator | Tuesday 23 September 2025 07:36:50 +0000 (0:00:00.614) 0:02:17.956 *****
2025-09-23 07:45:28.563595 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563602 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563608 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563615 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563621 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563628 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563635 | orchestrator |
2025-09-23 07:45:28.563641 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-09-23 07:45:28.563648 | orchestrator | Tuesday 23 September 2025 07:36:50 +0000 (0:00:00.803) 0:02:18.759 *****
2025-09-23 07:45:28.563654 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563661 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563667 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563674 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563680 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563691 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563698 | orchestrator |
2025-09-23 07:45:28.563704 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-09-23 07:45:28.563711 | orchestrator | Tuesday 23 September 2025 07:36:51 +0000 (0:00:00.527) 0:02:19.287 *****
2025-09-23 07:45:28.563717 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563724 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563730 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563737 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563743 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563800 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563816 | orchestrator |
2025-09-23 07:45:28.563823 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-09-23 07:45:28.563829 | orchestrator | Tuesday 23 September 2025 07:36:52 +0000 (0:00:00.660) 0:02:19.947 *****
2025-09-23 07:45:28.563836 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563843 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563849 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563855 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563862 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563873 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563880 | orchestrator |
2025-09-23 07:45:28.563886 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-09-23 07:45:28.563893 | orchestrator | Tuesday 23 September 2025 07:36:52 +0000 (0:00:00.521) 0:02:20.469 *****
2025-09-23 07:45:28.563899 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563906 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563912 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563919 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563925 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563932 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563938 | orchestrator |
2025-09-23 07:45:28.563945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-09-23 07:45:28.563951 | orchestrator | Tuesday 23 September 2025 07:36:53 +0000 (0:00:00.714) 0:02:21.183 *****
2025-09-23 07:45:28.563958 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.563964 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.563971 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.563977 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.563984 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.563990 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.563997 | orchestrator |
2025-09-23 07:45:28.564003 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-09-23 07:45:28.564010 | orchestrator | Tuesday 23 September 2025 07:36:53 +0000 (0:00:00.517) 0:02:21.700 *****
2025-09-23 07:45:28.564016 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.564023 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.564029 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.564036 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.564042 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.564048 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.564055 | orchestrator |
2025-09-23 07:45:28.564062 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-09-23 07:45:28.564068 | orchestrator | Tuesday 23 September 2025 07:36:54 +0000 (0:00:00.788) 0:02:22.489 *****
2025-09-23 07:45:28.564075 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.564081 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.564091 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.564097 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.564104 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.564110 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.564117 | orchestrator |
2025-09-23 07:45:28.564123 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-09-23 07:45:28.564130 | orchestrator | Tuesday 23 September 2025 07:36:55 +0000 (0:00:01.278) 0:02:23.768 *****
2025-09-23 07:45:28.564137 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:45:28.564143 | orchestrator |
2025-09-23 07:45:28.564150 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-09-23 07:45:28.564157 | orchestrator | Tuesday 23 September 2025 07:36:57 +0000 (0:00:01.181) 0:02:24.949 *****
2025-09-23 07:45:28.564163 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-09-23 07:45:28.564170 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-09-23 07:45:28.564176 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-09-23 07:45:28.564183 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-09-23 07:45:28.564189 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-09-23 07:45:28.564196 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-09-23 07:45:28.564202 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-09-23 07:45:28.564209 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-09-23 07:45:28.564215 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-09-23 07:45:28.564227 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-09-23 07:45:28.564233 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-09-23 07:45:28.564240 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-09-23 07:45:28.564246 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-09-23 07:45:28.564253 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-09-23 07:45:28.564259 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-09-23 07:45:28.564266 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-09-23 07:45:28.564272 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-09-23 07:45:28.564279 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-09-23 07:45:28.564286 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-09-23 07:45:28.564292 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-09-23 07:45:28.564303 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-09-23 07:45:28.564310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-09-23 07:45:28.564317 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-09-23 07:45:28.564323 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-09-23 07:45:28.564330 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-09-23 07:45:28.564336 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-09-23 07:45:28.564343 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-09-23 07:45:28.564350 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-09-23 07:45:28.564356 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-09-23 07:45:28.564399 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-09-23 07:45:28.564411 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-09-23 07:45:28.564422 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-09-23 07:45:28.564433 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-09-23 07:45:28.564443 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-09-23 07:45:28.564452 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-09-23 07:45:28.564461 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-09-23 07:45:28.564469 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-09-23 07:45:28.564479 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-09-23 07:45:28.564490 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-09-23 07:45:28.564500 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-09-23 07:45:28.564510 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-09-23 07:45:28.564520 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-09-23 07:45:28.564529 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-09-23 07:45:28.564535 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-09-23 07:45:28.564541 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-09-23 07:45:28.564547 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-09-23 07:45:28.564553 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-09-23 07:45:28.564559 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-23 07:45:28.564565 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-23 07:45:28.564571 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-09-23 07:45:28.564577 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-23 07:45:28.564583 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-23 07:45:28.564595 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-23 07:45:28.564605 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-23 07:45:28.564611 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-23 07:45:28.564617 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-23 07:45:28.564624 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-23 07:45:28.564630 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-23 07:45:28.564636 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-23 07:45:28.564642 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-23 07:45:28.564648 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-23 07:45:28.564654 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-23 07:45:28.564660 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-23 07:45:28.564666 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-23 07:45:28.564672 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-23 07:45:28.564678 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-23 07:45:28.564684 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-23 07:45:28.564690 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-23 07:45:28.564696 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-23 07:45:28.564702 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-23 07:45:28.564708 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-23 07:45:28.564714 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-23 07:45:28.564720 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-23 07:45:28.564726 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-23 07:45:28.564732 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-23 07:45:28.564738 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-23 07:45:28.564744 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-23 07:45:28.564750 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-23 07:45:28.564761 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-23 07:45:28.564767 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-23 07:45:28.564773 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-23 07:45:28.564779 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-23 07:45:28.564785 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-09-23 07:45:28.564791 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-09-23 07:45:28.564797 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-09-23 07:45:28.564803 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-23 07:45:28.564809 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-23 07:45:28.564815 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-09-23 07:45:28.564822 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-09-23 07:45:28.564828 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-09-23 07:45:28.564834 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-09-23 07:45:28.564840 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-09-23 07:45:28.564854 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-09-23 07:45:28.564860 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-09-23 07:45:28.564866 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-09-23 07:45:28.564872 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-09-23 07:45:28.564878 | orchestrator |
2025-09-23 07:45:28.564885 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-09-23 07:45:28.564891 | orchestrator | Tuesday 23 September 2025 07:37:04 +0000 (0:00:07.057) 0:02:32.006 *****
2025-09-23 07:45:28.564897 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.564903 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.564909 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.564916 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:45:28.564922 | orchestrator |
2025-09-23 07:45:28.564928 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-09-23 07:45:28.564934 | orchestrator | Tuesday 23 September 2025 07:37:05 +0000 (0:00:00.904) 0:02:32.911 *****
2025-09-23 07:45:28.564941 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-23 07:45:28.564947 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-23 07:45:28.564953 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-23 07:45:28.564960 | orchestrator |
2025-09-23 07:45:28.564969 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-09-23 07:45:28.564975 | orchestrator | Tuesday 23 September 2025 07:37:05 +0000 (0:00:00.817) 0:02:33.729 *****
2025-09-23 07:45:28.564981 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-23 07:45:28.564987 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-23 07:45:28.564993 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-23 07:45:28.564999 | orchestrator |
2025-09-23 07:45:28.565006 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-09-23 07:45:28.565012 | orchestrator | Tuesday 23 September 2025 07:37:07 +0000 (0:00:01.460) 0:02:35.189 *****
2025-09-23 07:45:28.565018 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.565024 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.565030 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.565036 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565042 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565048 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565054 | orchestrator |
2025-09-23 07:45:28.565060 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-09-23 07:45:28.565067 | orchestrator | Tuesday 23 September 2025 07:37:07 +0000 (0:00:00.615) 0:02:35.805 *****
2025-09-23 07:45:28.565073 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.565079 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.565085 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.565091 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565097 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565103 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565109 | orchestrator |
2025-09-23 07:45:28.565115 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-09-23 07:45:28.565121 | orchestrator | Tuesday 23 September 2025 07:37:08 +0000 (0:00:00.785) 0:02:36.590 *****
2025-09-23 07:45:28.565127 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.565138 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.565144 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.565150 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565156 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565162 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565168 | orchestrator |
2025-09-23 07:45:28.565174 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-09-23 07:45:28.565181 | orchestrator | Tuesday 23 September 2025 07:37:09 +0000 (0:00:00.630) 0:02:37.221 *****
2025-09-23 07:45:28.565190 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.565197 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.565203 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.565209 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565215 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565221 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565227 | orchestrator |
2025-09-23 07:45:28.565233 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-09-23 07:45:28.565239 | orchestrator | Tuesday 23 September 2025 07:37:09 +0000 (0:00:00.518) 0:02:37.739 *****
2025-09-23 07:45:28.565245 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.565251 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.565257 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.565263 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565269 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565276 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565282 | orchestrator |
2025-09-23 07:45:28.565288 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-09-23 07:45:28.565294 | orchestrator | Tuesday 23 September 2025 07:37:10 +0000 (0:00:00.662) 0:02:38.478 *****
2025-09-23 07:45:28.565300 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.565306 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.565312 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.565318 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565324 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565331 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565337 | orchestrator |
2025-09-23 07:45:28.565343 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-09-23 07:45:28.565349 | orchestrator | Tuesday 23 September 2025 07:37:11 +0000 (0:00:00.874) 0:02:39.141 *****
2025-09-23 07:45:28.565356 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.565362 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.565380 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.565386 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565392 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565398 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565404 | orchestrator |
2025-09-23 07:45:28.565410 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-09-23 07:45:28.565416 | orchestrator | Tuesday 23 September 2025 07:37:12 +0000 (0:00:00.907) 0:02:40.016 *****
2025-09-23 07:45:28.565422 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.565429 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.565435 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.565441 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565447 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565453 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565459 | orchestrator |
2025-09-23 07:45:28.565465 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-09-23 07:45:28.565472 | orchestrator | Tuesday 23 September 2025 07:37:13 +0000 (0:00:00.907) 0:02:40.923 *****
2025-09-23 07:45:28.565478 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565484 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565494 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565500 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.565510 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.565516 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.565522 | orchestrator |
2025-09-23 07:45:28.565528 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-09-23 07:45:28.565535 | orchestrator | Tuesday 23 September 2025 07:37:15 +0000 (0:00:02.877) 0:02:43.800 *****
2025-09-23 07:45:28.565541 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.565547 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.565553 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.565559 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565565 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565571 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565578 | orchestrator |
2025-09-23 07:45:28.565584 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-09-23 07:45:28.565590 | orchestrator | Tuesday 23 September 2025 07:37:16 +0000 (0:00:00.653) 0:02:44.454 *****
2025-09-23 07:45:28.565596 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.565602 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.565608 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.565614 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.565620 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.565626 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.565632 | orchestrator |
2025-09-23 07:45:28.565638 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-09-23 07:45:28.565645 | orchestrator | Tuesday 23 September 2025 07:37:17 +0000 
(0:00:00.936) 0:02:45.390 ***** 2025-09-23 07:45:28.565651 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.565657 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.565663 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.565669 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.565675 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.565681 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.565687 | orchestrator | 2025-09-23 07:45:28.565693 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-23 07:45:28.565699 | orchestrator | Tuesday 23 September 2025 07:37:18 +0000 (0:00:00.595) 0:02:45.986 ***** 2025-09-23 07:45:28.565706 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.565712 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.565718 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.565724 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.565730 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.565736 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.565742 | orchestrator | 2025-09-23 07:45:28.565753 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-23 07:45:28.565759 | orchestrator | Tuesday 23 September 2025 07:37:18 +0000 (0:00:00.689) 0:02:46.675 ***** 2025-09-23 07:45:28.565766 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-23 07:45:28.565774 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-23 07:45:28.565781 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.565791 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-23 07:45:28.565798 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-23 07:45:28.565804 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.565810 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-23 07:45:28.565817 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2025-09-23 07:45:28.565823 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.565829 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.565838 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.565844 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.565851 | orchestrator | 2025-09-23 07:45:28.565857 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-23 07:45:28.565863 | orchestrator | Tuesday 23 September 2025 07:37:19 +0000 (0:00:00.650) 0:02:47.325 ***** 2025-09-23 07:45:28.565869 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.565875 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.565881 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.565888 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.565894 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.565900 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.565906 | orchestrator | 2025-09-23 07:45:28.565912 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-23 07:45:28.565918 | orchestrator | Tuesday 23 September 2025 07:37:20 +0000 (0:00:00.765) 0:02:48.090 ***** 2025-09-23 07:45:28.565924 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.565931 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.565937 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.565943 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.565949 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.565955 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.565961 | orchestrator | 2025-09-23 07:45:28.565967 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-23 07:45:28.565973 | orchestrator | Tuesday 23 September 
2025 07:37:20 +0000 (0:00:00.610) 0:02:48.701 ***** 2025-09-23 07:45:28.565980 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.565986 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.565992 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.565998 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.566004 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.566010 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.566035 | orchestrator | 2025-09-23 07:45:28.566043 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-23 07:45:28.566049 | orchestrator | Tuesday 23 September 2025 07:37:21 +0000 (0:00:00.716) 0:02:49.417 ***** 2025-09-23 07:45:28.566055 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566066 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.566072 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.566078 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.566084 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.566090 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.566096 | orchestrator | 2025-09-23 07:45:28.566102 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-23 07:45:28.566109 | orchestrator | Tuesday 23 September 2025 07:37:22 +0000 (0:00:00.546) 0:02:49.964 ***** 2025-09-23 07:45:28.566115 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566124 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.566131 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.566137 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.566143 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.566149 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.566155 | orchestrator | 2025-09-23 07:45:28.566161 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-23 07:45:28.566167 | orchestrator | Tuesday 23 September 2025 07:37:22 +0000 (0:00:00.727) 0:02:50.691 ***** 2025-09-23 07:45:28.566173 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.566179 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.566185 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.566191 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.566197 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.566204 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.566210 | orchestrator | 2025-09-23 07:45:28.566216 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-23 07:45:28.566222 | orchestrator | Tuesday 23 September 2025 07:37:23 +0000 (0:00:00.744) 0:02:51.435 ***** 2025-09-23 07:45:28.566228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.566234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.566240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.566246 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566252 | orchestrator | 2025-09-23 07:45:28.566259 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-23 07:45:28.566265 | orchestrator | Tuesday 23 September 2025 07:37:24 +0000 (0:00:00.617) 0:02:52.053 ***** 2025-09-23 07:45:28.566271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.566277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.566283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.566289 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566295 | orchestrator | 2025-09-23 07:45:28.566301 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-23 07:45:28.566307 | orchestrator | Tuesday 23 September 2025 07:37:24 +0000 (0:00:00.513) 0:02:52.566 ***** 2025-09-23 07:45:28.566314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.566320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.566326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.566332 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566338 | orchestrator | 2025-09-23 07:45:28.566344 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-23 07:45:28.566350 | orchestrator | Tuesday 23 September 2025 07:37:25 +0000 (0:00:00.692) 0:02:53.259 ***** 2025-09-23 07:45:28.566356 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.566362 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.566398 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.566404 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.566410 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.566416 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.566422 | orchestrator | 2025-09-23 07:45:28.566428 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-23 07:45:28.566445 | orchestrator | Tuesday 23 September 2025 07:37:26 +0000 (0:00:00.761) 0:02:54.020 ***** 2025-09-23 07:45:28.566451 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-23 07:45:28.566457 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-23 07:45:28.566463 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-23 07:45:28.566469 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-23 07:45:28.566475 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-23 07:45:28.566481 | orchestrator | skipping: 
[testbed-node-0] 2025-09-23 07:45:28.566487 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.566493 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-23 07:45:28.566499 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.566505 | orchestrator | 2025-09-23 07:45:28.566511 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-23 07:45:28.566517 | orchestrator | Tuesday 23 September 2025 07:37:28 +0000 (0:00:02.566) 0:02:56.586 ***** 2025-09-23 07:45:28.566523 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.566529 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.566535 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.566541 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.566548 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.566554 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.566560 | orchestrator | 2025-09-23 07:45:28.566566 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-23 07:45:28.566572 | orchestrator | Tuesday 23 September 2025 07:37:31 +0000 (0:00:03.114) 0:02:59.701 ***** 2025-09-23 07:45:28.566578 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.566584 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.566590 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.566596 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.566602 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.566608 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.566614 | orchestrator | 2025-09-23 07:45:28.566621 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-23 07:45:28.566627 | orchestrator | Tuesday 23 September 2025 07:37:33 +0000 (0:00:01.578) 0:03:01.280 ***** 2025-09-23 07:45:28.566633 | orchestrator | skipping: 
[testbed-node-3] 2025-09-23 07:45:28.566639 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.566645 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.566651 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.566657 | orchestrator | 2025-09-23 07:45:28.566664 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-23 07:45:28.566670 | orchestrator | Tuesday 23 September 2025 07:37:34 +0000 (0:00:01.073) 0:03:02.353 ***** 2025-09-23 07:45:28.566676 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.566682 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.566688 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.566694 | orchestrator | 2025-09-23 07:45:28.566705 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-23 07:45:28.566711 | orchestrator | Tuesday 23 September 2025 07:37:34 +0000 (0:00:00.358) 0:03:02.712 ***** 2025-09-23 07:45:28.566717 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.566723 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.566729 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.566735 | orchestrator | 2025-09-23 07:45:28.566741 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-23 07:45:28.566748 | orchestrator | Tuesday 23 September 2025 07:37:36 +0000 (0:00:01.401) 0:03:04.113 ***** 2025-09-23 07:45:28.566754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-23 07:45:28.566760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-23 07:45:28.566766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-23 07:45:28.566777 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.566784 | orchestrator | 2025-09-23 
07:45:28.566790 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-23 07:45:28.566796 | orchestrator | Tuesday 23 September 2025 07:37:37 +0000 (0:00:00.740) 0:03:04.854 ***** 2025-09-23 07:45:28.566802 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.566808 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.566814 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.566820 | orchestrator | 2025-09-23 07:45:28.566826 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-23 07:45:28.566833 | orchestrator | Tuesday 23 September 2025 07:37:37 +0000 (0:00:00.365) 0:03:05.219 ***** 2025-09-23 07:45:28.566839 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.566845 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.566851 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.566857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.566863 | orchestrator | 2025-09-23 07:45:28.566869 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-23 07:45:28.566875 | orchestrator | Tuesday 23 September 2025 07:37:38 +0000 (0:00:00.989) 0:03:06.209 ***** 2025-09-23 07:45:28.566881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.566887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.566894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.566900 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566906 | orchestrator | 2025-09-23 07:45:28.566912 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-23 07:45:28.566918 | orchestrator | Tuesday 23 September 2025 07:37:38 +0000 (0:00:00.431) 
0:03:06.641 ***** 2025-09-23 07:45:28.566924 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566929 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.566934 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.566940 | orchestrator | 2025-09-23 07:45:28.566945 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-23 07:45:28.566950 | orchestrator | Tuesday 23 September 2025 07:37:39 +0000 (0:00:00.592) 0:03:07.233 ***** 2025-09-23 07:45:28.566956 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566961 | orchestrator | 2025-09-23 07:45:28.566969 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-23 07:45:28.566975 | orchestrator | Tuesday 23 September 2025 07:37:39 +0000 (0:00:00.215) 0:03:07.449 ***** 2025-09-23 07:45:28.566980 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.566985 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.566991 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.566996 | orchestrator | 2025-09-23 07:45:28.567001 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-23 07:45:28.567007 | orchestrator | Tuesday 23 September 2025 07:37:40 +0000 (0:00:00.423) 0:03:07.873 ***** 2025-09-23 07:45:28.567012 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567017 | orchestrator | 2025-09-23 07:45:28.567023 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-23 07:45:28.567028 | orchestrator | Tuesday 23 September 2025 07:37:40 +0000 (0:00:00.241) 0:03:08.114 ***** 2025-09-23 07:45:28.567034 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567039 | orchestrator | 2025-09-23 07:45:28.567045 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-23 07:45:28.567050 | 
orchestrator | Tuesday 23 September 2025 07:37:40 +0000 (0:00:00.240) 0:03:08.355 ***** 2025-09-23 07:45:28.567055 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567061 | orchestrator | 2025-09-23 07:45:28.567066 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-23 07:45:28.567071 | orchestrator | Tuesday 23 September 2025 07:37:40 +0000 (0:00:00.181) 0:03:08.536 ***** 2025-09-23 07:45:28.567081 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567087 | orchestrator | 2025-09-23 07:45:28.567092 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-23 07:45:28.567097 | orchestrator | Tuesday 23 September 2025 07:37:41 +0000 (0:00:00.311) 0:03:08.848 ***** 2025-09-23 07:45:28.567103 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567108 | orchestrator | 2025-09-23 07:45:28.567113 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-23 07:45:28.567118 | orchestrator | Tuesday 23 September 2025 07:37:41 +0000 (0:00:00.237) 0:03:09.085 ***** 2025-09-23 07:45:28.567124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.567129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.567134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.567140 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567145 | orchestrator | 2025-09-23 07:45:28.567150 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-23 07:45:28.567156 | orchestrator | Tuesday 23 September 2025 07:37:41 +0000 (0:00:00.562) 0:03:09.648 ***** 2025-09-23 07:45:28.567161 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567169 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.567175 | orchestrator | 
skipping: [testbed-node-5] 2025-09-23 07:45:28.567180 | orchestrator | 2025-09-23 07:45:28.567185 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-23 07:45:28.567191 | orchestrator | Tuesday 23 September 2025 07:37:42 +0000 (0:00:00.690) 0:03:10.339 ***** 2025-09-23 07:45:28.567196 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567201 | orchestrator | 2025-09-23 07:45:28.567207 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-23 07:45:28.567212 | orchestrator | Tuesday 23 September 2025 07:37:42 +0000 (0:00:00.280) 0:03:10.620 ***** 2025-09-23 07:45:28.567217 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567223 | orchestrator | 2025-09-23 07:45:28.567228 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-23 07:45:28.567233 | orchestrator | Tuesday 23 September 2025 07:37:43 +0000 (0:00:00.267) 0:03:10.887 ***** 2025-09-23 07:45:28.567238 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.567244 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.567249 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.567254 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.567260 | orchestrator | 2025-09-23 07:45:28.567265 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-23 07:45:28.567271 | orchestrator | Tuesday 23 September 2025 07:37:44 +0000 (0:00:01.028) 0:03:11.916 ***** 2025-09-23 07:45:28.567276 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.567281 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.567287 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.567292 | orchestrator | 2025-09-23 07:45:28.567297 | orchestrator | RUNNING HANDLER [ceph-handler : 
Copy mds restart script] *********************** 2025-09-23 07:45:28.567303 | orchestrator | Tuesday 23 September 2025 07:37:44 +0000 (0:00:00.712) 0:03:12.628 ***** 2025-09-23 07:45:28.567308 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.567313 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.567319 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.567324 | orchestrator | 2025-09-23 07:45:28.567329 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-23 07:45:28.567335 | orchestrator | Tuesday 23 September 2025 07:37:46 +0000 (0:00:02.062) 0:03:14.690 ***** 2025-09-23 07:45:28.567340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.567345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.567351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.567362 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.567378 | orchestrator | 2025-09-23 07:45:28.567384 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-23 07:45:28.567389 | orchestrator | Tuesday 23 September 2025 07:37:47 +0000 (0:00:00.492) 0:03:15.183 ***** 2025-09-23 07:45:28.567394 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.567400 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.567405 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.567410 | orchestrator | 2025-09-23 07:45:28.567416 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-23 07:45:28.567421 | orchestrator | Tuesday 23 September 2025 07:37:47 +0000 (0:00:00.425) 0:03:15.609 ***** 2025-09-23 07:45:28.567429 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.567435 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.567440 | orchestrator | skipping: 
[testbed-node-2]
2025-09-23 07:45:28.567445 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Tuesday 23 September 2025 07:37:48 +0000 (0:00:01.024) 0:03:16.633 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Tuesday 23 September 2025 07:37:49 +0000 (0:00:00.291) 0:03:16.925 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Tuesday 23 September 2025 07:37:50 +0000 (0:00:01.703) 0:03:18.629 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Tuesday 23 September 2025 07:37:51 +0000 (0:00:00.809) 0:03:19.438 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Tuesday 23 September 2025 07:37:51 +0000 (0:00:00.363) 0:03:19.801 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Tuesday 23 September 2025 07:37:52 +0000 (0:00:00.587) 0:03:20.389 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Tuesday 23 September 2025 07:37:53 +0000 (0:00:01.088) 0:03:21.477 *****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Tuesday 23 September 2025 07:37:54 +0000 (0:00:00.402) 0:03:21.880 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Tuesday 23 September 2025 07:37:55 +0000 (0:00:01.326) 0:03:23.206 *****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Tuesday 23 September 2025 07:37:56 +0000 (0:00:00.615) 0:03:23.822 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 23 September 2025 07:37:56 +0000 (0:00:00.562) 0:03:24.384 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 23 September 2025 07:37:57 +0000 (0:00:00.725) 0:03:25.110 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 23 September 2025 07:37:57 +0000 (0:00:00.538) 0:03:25.648 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 23 September 2025 07:37:58 +0000 (0:00:00.898) 0:03:26.546 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 23 September 2025 07:37:59 +0000 (0:00:00.974) 0:03:27.520 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 23 September 2025 07:38:00 +0000 (0:00:00.737) 0:03:28.258 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 23 September 2025 07:38:00 +0000 (0:00:00.365) 0:03:28.624 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 23 September 2025 07:38:01 +0000 (0:00:00.749) 0:03:29.373 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 23 September 2025 07:38:01 +0000 (0:00:00.326) 0:03:29.699 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 23 September 2025 07:38:02 +0000 (0:00:00.425) 0:03:30.125 *****
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 23 September 2025 07:38:03 +0000 (0:00:00.777) 0:03:30.903 *****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 23 September 2025 07:38:04 +0000 (0:00:00.922) 0:03:31.825 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 23 September 2025 07:38:04 +0000 (0:00:00.336) 0:03:32.162 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 23 September 2025 07:38:04 +0000 (0:00:00.494) 0:03:32.656 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 23 September 2025 07:38:05 +0000 (0:00:00.378) 0:03:33.035 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 23 September 2025 07:38:05 +0000 (0:00:00.328) 0:03:33.364 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 23 September 2025 07:38:05 +0000 (0:00:00.297) 0:03:33.661 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 23 September 2025 07:38:06 +0000 (0:00:00.464) 0:03:34.125 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 23 September 2025 07:38:06 +0000 (0:00:00.284) 0:03:34.409 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 23 September 2025 07:38:06 +0000 (0:00:00.301) 0:03:34.711 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 23 September 2025 07:38:07 +0000 (0:00:00.328) 0:03:35.039 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Tuesday 23 September 2025 07:38:07 +0000 (0:00:00.693) 0:03:35.732 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Tuesday 23 September 2025 07:38:08 +0000 (0:00:00.309) 0:03:36.042 *****
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Tuesday 23 September 2025 07:38:08 +0000 (0:00:00.517) 0:03:36.560 *****
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Tuesday 23 September 2025 07:38:09 +0000 (0:00:00.288) 0:03:36.848 *****
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Tuesday 23 September 2025 07:38:09 +0000 (0:00:00.966) 0:03:37.814 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Tuesday 23 September 2025 07:38:10 +0000 (0:00:00.356) 0:03:38.171 *****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Tuesday 23 September 2025 07:38:10 +0000 (0:00:00.442) 0:03:38.614 *****
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Tuesday 23 September 2025 07:38:12 +0000 (0:00:01.488) 0:03:40.102 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Tuesday 23 September 2025 07:38:13 +0000 (0:00:01.143) 0:03:41.246 *****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Tuesday 23 September 2025 07:38:14 +0000 (0:00:00.717) 0:03:41.964 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Tuesday 23 September 2025 07:38:14 +0000 (0:00:00.729) 0:03:42.693 *****
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Tuesday 23 September 2025 07:38:16 +0000 (0:00:01.296) 0:03:43.990 *****
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Tuesday 23 September 2025 07:38:16 +0000 (0:00:00.759) 0:03:44.749 *****
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-1 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Tuesday 23 September 2025 07:38:20 +0000 (0:00:03.221) 0:03:47.971 *****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Tuesday 23 September 2025 07:38:21 +0000 (0:00:01.344) 0:03:49.315 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Tuesday 23 September 2025 07:38:21 +0000 (0:00:00.282) 0:03:49.598 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Tuesday 23 September 2025 07:38:22 +0000 (0:00:00.281) 0:03:49.880 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Tuesday 23 September 2025 07:38:23 +0000 (0:00:01.512) 0:03:51.392 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Tuesday 23 September 2025 07:38:25 +0000 (0:00:01.534) 0:03:52.927 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Tuesday 23 September 2025 07:38:25 +0000 (0:00:00.304) 0:03:53.231 *****
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Tuesday 23 September 2025 07:38:25 +0000 (0:00:00.534) 0:03:53.765 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Tuesday 23 September 2025 07:38:26 +0000 (0:00:00.408) 0:03:54.174 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Tuesday 23 September 2025 07:38:26 +0000 (0:00:00.292) 0:03:54.467 *****
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Tuesday 23 September 2025 07:38:27 +0000 (0:00:00.460) 0:03:54.927 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Tuesday 23 September 2025 07:38:28 +0000 (0:00:01.668) 0:03:56.596 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Tuesday 23 September 2025 07:38:30 +0000 (0:00:01.384) 0:03:57.981 *****
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Start the monitor service] ************************************
Tuesday 23 September 2025 07:38:31 +0000 (0:00:01.839) 0:03:59.820 *****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Tuesday 23 September 2025 07:38:33 +0000 (0:00:01.876) 0:04:01.696 *****
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Tuesday 23 September 2025 07:38:34 +0000 (0:00:00.858) 0:04:02.555 *****
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Tuesday 23 September 2025 07:38:56 +0000 (0:00:21.873) 0:04:24.429 *****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Tuesday 23 September 2025 07:39:07 +0000 (0:00:11.299) 0:04:35.728 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Tuesday 23 September 2025 07:39:08 +0000 (0:00:00.275) 0:04:36.004 *****
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea3035b4934c712bdef72aa7d0c892905c23fabb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea3035b4934c712bdef72aa7d0c892905c23fabb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea3035b4934c712bdef72aa7d0c892905c23fabb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea3035b4934c712bdef72aa7d0c892905c23fabb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea3035b4934c712bdef72aa7d0c892905c23fabb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ea3035b4934c712bdef72aa7d0c892905c23fabb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ea3035b4934c712bdef72aa7d0c892905c23fabb'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Tuesday 23 September 2025 07:39:22 +0000 (0:00:14.518) 0:04:50.522 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Tuesday 23 September 2025 07:39:23 +0000 (0:00:00.324) 0:04:50.846 *****
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Tuesday 23 September 2025 07:39:23 +0000 (0:00:00.482) 0:04:51.328 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Tuesday 23 September 2025 07:39:23 +0000 (0:00:00.444) 0:04:51.773 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Tuesday 23 September 2025 07:39:24 +0000 (0:00:00.274) 0:04:52.047 *****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Tuesday 23 September 2025 07:39:24 +0000 (0:00:00.545) 0:04:52.592 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 23 September 2025 07:39:25 +0000 (0:00:00.641) 0:04:53.233 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 23 September 2025 07:39:25 +0000 (0:00:00.449) 0:04:53.683 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 23 September 2025 07:39:26 +0000 (0:00:00.475) 0:04:54.159 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 23 September 2025 07:39:27 +0000 (0:00:00.834) 0:04:54.994 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 23 September 2025 07:39:27 +0000 (0:00:00.313) 0:04:55.308 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 23 September 2025 07:39:27 +0000 (0:00:00.266) 0:04:55.574 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 23 September 2025 07:39:28 +0000 (0:00:00.260) 0:04:55.834 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 23 September 2025 07:39:28 +0000 (0:00:00.847) 0:04:56.681 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 23 September 2025 07:39:29 +0000 (0:00:00.295)
0:04:56.977 ***** 2025-09-23 07:45:28.570117 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570122 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570127 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570133 | orchestrator | 2025-09-23 07:45:28.570138 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-23 07:45:28.570144 | orchestrator | Tuesday 23 September 2025 07:39:29 +0000 (0:00:00.290) 0:04:57.267 ***** 2025-09-23 07:45:28.570149 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.570154 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.570160 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.570165 | orchestrator | 2025-09-23 07:45:28.570171 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-23 07:45:28.570176 | orchestrator | Tuesday 23 September 2025 07:39:30 +0000 (0:00:00.767) 0:04:58.035 ***** 2025-09-23 07:45:28.570181 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.570187 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.570192 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.570198 | orchestrator | 2025-09-23 07:45:28.570203 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-23 07:45:28.570208 | orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.881) 0:04:58.916 ***** 2025-09-23 07:45:28.570214 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570219 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570224 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570230 | orchestrator | 2025-09-23 07:45:28.570235 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-23 07:45:28.570241 | orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.263) 0:04:59.180 ***** 2025-09-23 
07:45:28.570246 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.570251 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.570261 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.570266 | orchestrator | 2025-09-23 07:45:28.570272 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-23 07:45:28.570277 | orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.330) 0:04:59.511 ***** 2025-09-23 07:45:28.570282 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570288 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570293 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570298 | orchestrator | 2025-09-23 07:45:28.570304 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-23 07:45:28.570309 | orchestrator | Tuesday 23 September 2025 07:39:31 +0000 (0:00:00.273) 0:04:59.784 ***** 2025-09-23 07:45:28.570314 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570320 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570345 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570352 | orchestrator | 2025-09-23 07:45:28.570357 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-23 07:45:28.570392 | orchestrator | Tuesday 23 September 2025 07:39:32 +0000 (0:00:00.529) 0:05:00.314 ***** 2025-09-23 07:45:28.570399 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570404 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570409 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570415 | orchestrator | 2025-09-23 07:45:28.570420 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-23 07:45:28.570426 | orchestrator | Tuesday 23 September 2025 07:39:32 +0000 (0:00:00.310) 0:05:00.624 ***** 2025-09-23 07:45:28.570431 | 
orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570436 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570441 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570447 | orchestrator | 2025-09-23 07:45:28.570452 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-23 07:45:28.570458 | orchestrator | Tuesday 23 September 2025 07:39:33 +0000 (0:00:00.372) 0:05:00.997 ***** 2025-09-23 07:45:28.570463 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570468 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570474 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570479 | orchestrator | 2025-09-23 07:45:28.570484 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-23 07:45:28.570490 | orchestrator | Tuesday 23 September 2025 07:39:33 +0000 (0:00:00.336) 0:05:01.333 ***** 2025-09-23 07:45:28.570495 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.570500 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.570506 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.570511 | orchestrator | 2025-09-23 07:45:28.570517 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-23 07:45:28.570522 | orchestrator | Tuesday 23 September 2025 07:39:33 +0000 (0:00:00.421) 0:05:01.755 ***** 2025-09-23 07:45:28.570527 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.570533 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.570538 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.570543 | orchestrator | 2025-09-23 07:45:28.570549 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-23 07:45:28.570554 | orchestrator | Tuesday 23 September 2025 07:39:34 +0000 (0:00:00.622) 0:05:02.377 ***** 2025-09-23 07:45:28.570559 | orchestrator | ok: [testbed-node-0] 
2025-09-23 07:45:28.570564 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.570570 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.570575 | orchestrator | 2025-09-23 07:45:28.570580 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-23 07:45:28.570586 | orchestrator | Tuesday 23 September 2025 07:39:35 +0000 (0:00:00.609) 0:05:02.986 ***** 2025-09-23 07:45:28.570591 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-23 07:45:28.570596 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-23 07:45:28.570602 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-23 07:45:28.570614 | orchestrator | 2025-09-23 07:45:28.570619 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-23 07:45:28.570625 | orchestrator | Tuesday 23 September 2025 07:39:36 +0000 (0:00:00.975) 0:05:03.962 ***** 2025-09-23 07:45:28.570634 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.570639 | orchestrator | 2025-09-23 07:45:28.570645 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-23 07:45:28.570650 | orchestrator | Tuesday 23 September 2025 07:39:36 +0000 (0:00:00.847) 0:05:04.809 ***** 2025-09-23 07:45:28.570655 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.570661 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.570666 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.570671 | orchestrator | 2025-09-23 07:45:28.570677 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-23 07:45:28.570682 | orchestrator | Tuesday 23 September 2025 07:39:37 +0000 (0:00:00.718) 0:05:05.528 ***** 2025-09-23 07:45:28.570687 | 
orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570693 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570698 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570704 | orchestrator | 2025-09-23 07:45:28.570709 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-23 07:45:28.570714 | orchestrator | Tuesday 23 September 2025 07:39:38 +0000 (0:00:00.402) 0:05:05.930 ***** 2025-09-23 07:45:28.570720 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-23 07:45:28.570725 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-23 07:45:28.570730 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-23 07:45:28.570736 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-23 07:45:28.570741 | orchestrator | 2025-09-23 07:45:28.570746 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-23 07:45:28.570752 | orchestrator | Tuesday 23 September 2025 07:39:48 +0000 (0:00:10.826) 0:05:16.757 ***** 2025-09-23 07:45:28.570757 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.570762 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.570768 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.570773 | orchestrator | 2025-09-23 07:45:28.570778 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-23 07:45:28.570783 | orchestrator | Tuesday 23 September 2025 07:39:49 +0000 (0:00:00.495) 0:05:17.252 ***** 2025-09-23 07:45:28.570789 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-23 07:45:28.570794 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-23 07:45:28.570799 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-23 07:45:28.570805 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-23 07:45:28.570810 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.570815 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.570821 | orchestrator | 2025-09-23 07:45:28.570845 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-23 07:45:28.570851 | orchestrator | Tuesday 23 September 2025 07:39:51 +0000 (0:00:01.988) 0:05:19.240 ***** 2025-09-23 07:45:28.570856 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-23 07:45:28.570862 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-23 07:45:28.570867 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-23 07:45:28.570873 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-23 07:45:28.570878 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-23 07:45:28.570883 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-23 07:45:28.570888 | orchestrator | 2025-09-23 07:45:28.570894 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-23 07:45:28.570904 | orchestrator | Tuesday 23 September 2025 07:39:52 +0000 (0:00:01.371) 0:05:20.612 ***** 2025-09-23 07:45:28.570909 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.570914 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.570919 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.570925 | orchestrator | 2025-09-23 07:45:28.570930 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-23 07:45:28.570935 | orchestrator | Tuesday 23 September 2025 07:39:53 +0000 (0:00:00.705) 0:05:21.317 ***** 2025-09-23 07:45:28.570941 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570946 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570950 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570955 | 
orchestrator | 2025-09-23 07:45:28.570960 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-23 07:45:28.570964 | orchestrator | Tuesday 23 September 2025 07:39:53 +0000 (0:00:00.255) 0:05:21.572 ***** 2025-09-23 07:45:28.570969 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.570974 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.570979 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.570984 | orchestrator | 2025-09-23 07:45:28.570988 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-23 07:45:28.570993 | orchestrator | Tuesday 23 September 2025 07:39:54 +0000 (0:00:00.512) 0:05:22.084 ***** 2025-09-23 07:45:28.570998 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1, testbed-node-2, testbed-node-0 2025-09-23 07:45:28.571003 | orchestrator | 2025-09-23 07:45:28.571007 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-23 07:45:28.571012 | orchestrator | Tuesday 23 September 2025 07:39:54 +0000 (0:00:00.668) 0:05:22.753 ***** 2025-09-23 07:45:28.571017 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.571021 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.571026 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.571031 | orchestrator | 2025-09-23 07:45:28.571036 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-23 07:45:28.571041 | orchestrator | Tuesday 23 September 2025 07:39:55 +0000 (0:00:00.327) 0:05:23.080 ***** 2025-09-23 07:45:28.571045 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.571050 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.571055 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.571060 | orchestrator | 2025-09-23 07:45:28.571064 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-23 07:45:28.571072 | orchestrator | Tuesday 23 September 2025 07:39:55 +0000 (0:00:00.631) 0:05:23.712 ***** 2025-09-23 07:45:28.571077 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.571082 | orchestrator | 2025-09-23 07:45:28.571087 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-23 07:45:28.571091 | orchestrator | Tuesday 23 September 2025 07:39:56 +0000 (0:00:00.545) 0:05:24.257 ***** 2025-09-23 07:45:28.571096 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.571101 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.571106 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.571110 | orchestrator | 2025-09-23 07:45:28.571115 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-23 07:45:28.571120 | orchestrator | Tuesday 23 September 2025 07:39:57 +0000 (0:00:01.275) 0:05:25.532 ***** 2025-09-23 07:45:28.571124 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.571129 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.571134 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.571139 | orchestrator | 2025-09-23 07:45:28.571143 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-23 07:45:28.571148 | orchestrator | Tuesday 23 September 2025 07:39:59 +0000 (0:00:01.532) 0:05:27.065 ***** 2025-09-23 07:45:28.571153 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.571161 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.571166 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.571171 | orchestrator | 2025-09-23 07:45:28.571176 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2025-09-23 07:45:28.571180 | orchestrator | Tuesday 23 September 2025 07:40:01 +0000 (0:00:01.914) 0:05:28.979 ***** 2025-09-23 07:45:28.571185 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.571190 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.571194 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.571199 | orchestrator | 2025-09-23 07:45:28.571204 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-23 07:45:28.571209 | orchestrator | Tuesday 23 September 2025 07:40:03 +0000 (0:00:02.117) 0:05:31.096 ***** 2025-09-23 07:45:28.571213 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.571218 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.571223 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-23 07:45:28.571228 | orchestrator | 2025-09-23 07:45:28.571232 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-23 07:45:28.571237 | orchestrator | Tuesday 23 September 2025 07:40:03 +0000 (0:00:00.433) 0:05:31.529 ***** 2025-09-23 07:45:28.571242 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-23 07:45:28.571261 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-23 07:45:28.571267 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-23 07:45:28.571272 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-23 07:45:28.571277 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-09-23 07:45:28.571282 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-23 07:45:28.571287 | orchestrator | 2025-09-23 07:45:28.571291 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-23 07:45:28.571296 | orchestrator | Tuesday 23 September 2025 07:40:34 +0000 (0:00:30.880) 0:06:02.410 ***** 2025-09-23 07:45:28.571301 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-23 07:45:28.571306 | orchestrator | 2025-09-23 07:45:28.571310 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-23 07:45:28.571315 | orchestrator | Tuesday 23 September 2025 07:40:35 +0000 (0:00:01.364) 0:06:03.775 ***** 2025-09-23 07:45:28.571320 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.571325 | orchestrator | 2025-09-23 07:45:28.571329 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-23 07:45:28.571334 | orchestrator | Tuesday 23 September 2025 07:40:36 +0000 (0:00:00.341) 0:06:04.116 ***** 2025-09-23 07:45:28.571339 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.571344 | orchestrator | 2025-09-23 07:45:28.571348 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-23 07:45:28.571353 | orchestrator | Tuesday 23 September 2025 07:40:36 +0000 (0:00:00.146) 0:06:04.263 ***** 2025-09-23 07:45:28.571358 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-23 07:45:28.571374 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-23 07:45:28.571379 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-23 07:45:28.571384 | orchestrator | 2025-09-23 07:45:28.571389 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-09-23 07:45:28.571394 | orchestrator | Tuesday 23 September 2025 07:40:43 +0000 (0:00:06.806) 0:06:11.069 ***** 2025-09-23 07:45:28.571398 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-23 07:45:28.571403 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-23 07:45:28.571412 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-23 07:45:28.571417 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-23 07:45:28.571422 | orchestrator | 2025-09-23 07:45:28.571426 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-23 07:45:28.571431 | orchestrator | Tuesday 23 September 2025 07:40:48 +0000 (0:00:04.789) 0:06:15.859 ***** 2025-09-23 07:45:28.571436 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.571441 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.571449 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.571453 | orchestrator | 2025-09-23 07:45:28.571458 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-23 07:45:28.571463 | orchestrator | Tuesday 23 September 2025 07:40:49 +0000 (0:00:00.982) 0:06:16.841 ***** 2025-09-23 07:45:28.571468 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.571473 | orchestrator | 2025-09-23 07:45:28.571477 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-23 07:45:28.571482 | orchestrator | Tuesday 23 September 2025 07:40:49 +0000 (0:00:00.529) 0:06:17.371 ***** 2025-09-23 07:45:28.571487 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.571492 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.571496 | orchestrator | ok: 
[testbed-node-2] 2025-09-23 07:45:28.571501 | orchestrator | 2025-09-23 07:45:28.571506 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-23 07:45:28.571511 | orchestrator | Tuesday 23 September 2025 07:40:49 +0000 (0:00:00.321) 0:06:17.693 ***** 2025-09-23 07:45:28.571515 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.571520 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.571525 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.571530 | orchestrator | 2025-09-23 07:45:28.571534 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-23 07:45:28.571539 | orchestrator | Tuesday 23 September 2025 07:40:51 +0000 (0:00:01.481) 0:06:19.174 ***** 2025-09-23 07:45:28.571544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-23 07:45:28.571549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-23 07:45:28.571553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-23 07:45:28.571558 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.571563 | orchestrator | 2025-09-23 07:45:28.571568 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-23 07:45:28.571573 | orchestrator | Tuesday 23 September 2025 07:40:52 +0000 (0:00:00.675) 0:06:19.850 ***** 2025-09-23 07:45:28.571578 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.571582 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.571587 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.571592 | orchestrator | 2025-09-23 07:45:28.571597 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-23 07:45:28.571601 | orchestrator | 2025-09-23 07:45:28.571606 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-23 
07:45:28.571611 | orchestrator | Tuesday 23 September 2025 07:40:52 +0000 (0:00:00.633) 0:06:20.484 ***** 2025-09-23 07:45:28.571616 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.571621 | orchestrator | 2025-09-23 07:45:28.571642 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-23 07:45:28.571647 | orchestrator | Tuesday 23 September 2025 07:40:53 +0000 (0:00:00.763) 0:06:21.247 ***** 2025-09-23 07:45:28.571652 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.571657 | orchestrator | 2025-09-23 07:45:28.571662 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-23 07:45:28.571672 | orchestrator | Tuesday 23 September 2025 07:40:53 +0000 (0:00:00.555) 0:06:21.803 ***** 2025-09-23 07:45:28.571677 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.571681 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.571686 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.571691 | orchestrator | 2025-09-23 07:45:28.571696 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-23 07:45:28.571700 | orchestrator | Tuesday 23 September 2025 07:40:54 +0000 (0:00:00.302) 0:06:22.105 ***** 2025-09-23 07:45:28.571705 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.571710 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.571715 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.571720 | orchestrator | 2025-09-23 07:45:28.571724 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-23 07:45:28.571729 | orchestrator | Tuesday 23 September 2025 07:40:55 +0000 (0:00:01.013) 0:06:23.119 ***** 
2025-09-23 07:45:28.571734 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.571739 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.571743 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.571748 | orchestrator |
2025-09-23 07:45:28.571753 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-23 07:45:28.571758 | orchestrator | Tuesday 23 September 2025  07:40:56 +0000 (0:00:00.734)       0:06:23.853 *****
2025-09-23 07:45:28.571762 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.571767 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.571772 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.571776 | orchestrator |
2025-09-23 07:45:28.571781 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-23 07:45:28.571786 | orchestrator | Tuesday 23 September 2025  07:40:56 +0000 (0:00:00.744)       0:06:24.598 *****
2025-09-23 07:45:28.571791 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.571795 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.571800 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.571805 | orchestrator |
2025-09-23 07:45:28.571810 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-23 07:45:28.571814 | orchestrator | Tuesday 23 September 2025  07:40:57 +0000 (0:00:00.304)       0:06:24.902 *****
2025-09-23 07:45:28.571819 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.571824 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.571829 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.571833 | orchestrator |
2025-09-23 07:45:28.571838 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-23 07:45:28.571843 | orchestrator | Tuesday 23 September 2025  07:40:57 +0000 (0:00:00.578)       0:06:25.480 *****
2025-09-23 07:45:28.571848 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.571853 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.571857 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.571862 | orchestrator |
2025-09-23 07:45:28.571869 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-23 07:45:28.571874 | orchestrator | Tuesday 23 September 2025  07:40:58 +0000 (0:00:00.374)       0:06:25.855 *****
2025-09-23 07:45:28.571879 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.571884 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.571888 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.571893 | orchestrator |
2025-09-23 07:45:28.571898 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-23 07:45:28.571903 | orchestrator | Tuesday 23 September 2025  07:40:58 +0000 (0:00:00.717)       0:06:26.572 *****
2025-09-23 07:45:28.571907 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.571912 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.571917 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.571921 | orchestrator |
2025-09-23 07:45:28.571926 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-23 07:45:28.571931 | orchestrator | Tuesday 23 September 2025  07:40:59 +0000 (0:00:00.671)       0:06:27.244 *****
2025-09-23 07:45:28.571939 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.571944 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.571949 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.571953 | orchestrator |
2025-09-23 07:45:28.571958 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-23 07:45:28.571963 | orchestrator | Tuesday 23 September 2025  07:40:59 +0000 (0:00:00.569)       0:06:27.813 *****
2025-09-23 07:45:28.571967 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.571972 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.571977 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.571982 | orchestrator |
2025-09-23 07:45:28.571986 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-23 07:45:28.571991 | orchestrator | Tuesday 23 September 2025  07:41:00 +0000 (0:00:00.336)       0:06:28.149 *****
2025-09-23 07:45:28.571996 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572001 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572005 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572010 | orchestrator |
2025-09-23 07:45:28.572015 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-23 07:45:28.572020 | orchestrator | Tuesday 23 September 2025  07:41:00 +0000 (0:00:00.330)       0:06:28.480 *****
2025-09-23 07:45:28.572024 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572029 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572034 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572038 | orchestrator |
2025-09-23 07:45:28.572043 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-23 07:45:28.572048 | orchestrator | Tuesday 23 September 2025  07:41:01 +0000 (0:00:00.350)       0:06:28.831 *****
2025-09-23 07:45:28.572052 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572057 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572062 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572066 | orchestrator |
2025-09-23 07:45:28.572073 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-23 07:45:28.572078 | orchestrator | Tuesday 23 September 2025  07:41:01 +0000 (0:00:00.583)       0:06:29.414 *****
2025-09-23 07:45:28.572083 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.572088 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.572093 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.572097 | orchestrator |
2025-09-23 07:45:28.572102 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-23 07:45:28.572107 | orchestrator | Tuesday 23 September 2025  07:41:01 +0000 (0:00:00.318)       0:06:29.733 *****
2025-09-23 07:45:28.572111 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.572116 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.572121 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.572126 | orchestrator |
2025-09-23 07:45:28.572130 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-23 07:45:28.572135 | orchestrator | Tuesday 23 September 2025  07:41:02 +0000 (0:00:00.303)       0:06:30.036 *****
2025-09-23 07:45:28.572140 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.572144 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.572149 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.572154 | orchestrator |
2025-09-23 07:45:28.572158 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-23 07:45:28.572163 | orchestrator | Tuesday 23 September 2025  07:41:02 +0000 (0:00:00.308)       0:06:30.345 *****
2025-09-23 07:45:28.572168 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572172 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572177 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572182 | orchestrator |
2025-09-23 07:45:28.572187 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-23 07:45:28.572192 | orchestrator | Tuesday 23 September 2025  07:41:03 +0000 (0:00:00.618)       0:06:30.963 *****
2025-09-23 07:45:28.572196 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572201 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572209 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572214 | orchestrator |
2025-09-23 07:45:28.572219 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-23 07:45:28.572224 | orchestrator | Tuesday 23 September 2025  07:41:03 +0000 (0:00:00.582)       0:06:31.546 *****
2025-09-23 07:45:28.572228 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572233 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572238 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572242 | orchestrator |
2025-09-23 07:45:28.572247 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-23 07:45:28.572252 | orchestrator | Tuesday 23 September 2025  07:41:04 +0000 (0:00:00.349)       0:06:31.895 *****
2025-09-23 07:45:28.572257 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-23 07:45:28.572261 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-23 07:45:28.572266 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-23 07:45:28.572271 | orchestrator |
2025-09-23 07:45:28.572276 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-23 07:45:28.572280 | orchestrator | Tuesday 23 September 2025  07:41:05 +0000 (0:00:01.012)       0:06:32.908 *****
2025-09-23 07:45:28.572287 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:45:28.572292 | orchestrator |
2025-09-23 07:45:28.572297 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-23 07:45:28.572302 | orchestrator | Tuesday 23 September 2025  07:41:05 +0000 (0:00:00.886)       0:06:33.795 *****
2025-09-23 07:45:28.572306 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.572311 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.572316 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.572320 | orchestrator |
2025-09-23 07:45:28.572325 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-23 07:45:28.572330 | orchestrator | Tuesday 23 September 2025  07:41:06 +0000 (0:00:00.317)       0:06:34.112 *****
2025-09-23 07:45:28.572334 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.572339 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.572344 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.572348 | orchestrator |
2025-09-23 07:45:28.572353 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-23 07:45:28.572358 | orchestrator | Tuesday 23 September 2025  07:41:06 +0000 (0:00:00.307)       0:06:34.420 *****
2025-09-23 07:45:28.572377 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572382 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572387 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572391 | orchestrator |
2025-09-23 07:45:28.572396 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-23 07:45:28.572401 | orchestrator | Tuesday 23 September 2025  07:41:07 +0000 (0:00:00.947)       0:06:35.368 *****
2025-09-23 07:45:28.572406 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572411 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572415 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572420 | orchestrator |
2025-09-23 07:45:28.572425 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-23 07:45:28.572430 | orchestrator | Tuesday 23 September 2025  07:41:07 +0000 (0:00:00.373)       0:06:35.741 *****
2025-09-23 07:45:28.572434 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-23 07:45:28.572439 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-23 07:45:28.572444 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-23 07:45:28.572449 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-23 07:45:28.572454 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-23 07:45:28.572462 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-23 07:45:28.572470 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-23 07:45:28.572475 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-23 07:45:28.572479 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-23 07:45:28.572484 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-23 07:45:28.572489 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-23 07:45:28.572494 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-23 07:45:28.572499 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-23 07:45:28.572504 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-23 07:45:28.572509 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-23 07:45:28.572514 | orchestrator |
2025-09-23 07:45:28.572518 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-23 07:45:28.572524 | orchestrator | Tuesday 23 September 2025  07:41:11 +0000 (0:00:03.156)       0:06:38.898 *****
2025-09-23 07:45:28.572528 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.572533 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.572538 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.572543 | orchestrator |
2025-09-23 07:45:28.572548 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-23 07:45:28.572553 | orchestrator | Tuesday 23 September 2025  07:41:11 +0000 (0:00:00.329)       0:06:39.227 *****
2025-09-23 07:45:28.572558 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:45:28.572563 | orchestrator |
2025-09-23 07:45:28.572567 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-23 07:45:28.572572 | orchestrator | Tuesday 23 September 2025  07:41:12 +0000 (0:00:00.950)       0:06:40.178 *****
2025-09-23 07:45:28.572577 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-23 07:45:28.572582 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-23 07:45:28.572587 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-23 07:45:28.572591 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-23 07:45:28.572596 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-23 07:45:28.572601 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-23 07:45:28.572606 | orchestrator |
2025-09-23 07:45:28.572611 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-23 07:45:28.572615 | orchestrator | Tuesday 23 September 2025  07:41:13 +0000 (0:00:01.072)       0:06:41.250 *****
2025-09-23 07:45:28.572620 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-23 07:45:28.572628 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-23 07:45:28.572633 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-23 07:45:28.572638 | orchestrator |
2025-09-23 07:45:28.572642 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-23 07:45:28.572647 | orchestrator | Tuesday 23 September 2025  07:41:15 +0000 (0:00:02.342)       0:06:43.592 *****
2025-09-23 07:45:28.572652 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-23 07:45:28.572657 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-23 07:45:28.572662 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:45:28.572667 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-23 07:45:28.572672 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-23 07:45:28.572680 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:45:28.572685 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-23 07:45:28.572690 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-23 07:45:28.572695 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:45:28.572700 | orchestrator |
2025-09-23 07:45:28.572705 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-23 07:45:28.572709 | orchestrator | Tuesday 23 September 2025  07:41:16 +0000 (0:00:01.221)       0:06:44.814 *****
2025-09-23 07:45:28.572714 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-23 07:45:28.572719 | orchestrator |
2025-09-23 07:45:28.572724 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-23 07:45:28.572729 | orchestrator | Tuesday 23 September 2025  07:41:19 +0000 (0:00:02.747)       0:06:47.562 *****
2025-09-23 07:45:28.572734 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:45:28.572739 | orchestrator |
2025-09-23 07:45:28.572744 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-23 07:45:28.572748 | orchestrator | Tuesday 23 September 2025  07:41:20 +0000 (0:00:00.546)       0:06:48.108 *****
2025-09-23 07:45:28.572753 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4a27826e-7697-5dae-8bcf-65313ee63b58', 'data_vg': 'ceph-4a27826e-7697-5dae-8bcf-65313ee63b58'})
2025-09-23 07:45:28.572759 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7ede7e8c-1177-5738-bf30-f710eefa62dc', 'data_vg': 'ceph-7ede7e8c-1177-5738-bf30-f710eefa62dc'})
2025-09-23 07:45:28.572764 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fa3e03eb-2d2a-5719-835a-39fedcc9009f', 'data_vg': 'ceph-fa3e03eb-2d2a-5719-835a-39fedcc9009f'})
2025-09-23 07:45:28.572771 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b31a677e-efd4-57fc-b4ad-0e2207d5fa48', 'data_vg': 'ceph-b31a677e-efd4-57fc-b4ad-0e2207d5fa48'})
2025-09-23 07:45:28.572776 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6b345e42-d385-5c5d-ac31-471707d336a3', 'data_vg': 'ceph-6b345e42-d385-5c5d-ac31-471707d336a3'})
2025-09-23 07:45:28.572781 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0570cb7e-4d0f-57ea-8b12-da850e205fc7', 'data_vg': 'ceph-0570cb7e-4d0f-57ea-8b12-da850e205fc7'})
2025-09-23 07:45:28.572786 | orchestrator |
2025-09-23 07:45:28.572791 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-23 07:45:28.572795 | orchestrator | Tuesday 23 September 2025  07:42:04 +0000 (0:00:44.185)       0:07:32.294 *****
2025-09-23 07:45:28.572800 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.572805 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.572810 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.572815 | orchestrator |
2025-09-23 07:45:28.572820 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-23 07:45:28.572824 | orchestrator | Tuesday 23 September 2025  07:42:05 +0000 (0:00:00.556)       0:07:32.850 *****
2025-09-23 07:45:28.572829 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:45:28.572834 | orchestrator |
2025-09-23 07:45:28.572839 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-23 07:45:28.572843 | orchestrator | Tuesday 23 September 2025  07:42:05 +0000 (0:00:00.587)       0:07:33.437 *****
2025-09-23 07:45:28.572849 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572853 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572858 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572863 | orchestrator |
2025-09-23 07:45:28.572868 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-23 07:45:28.572872 | orchestrator | Tuesday 23 September 2025  07:42:06 +0000 (0:00:00.623)       0:07:34.061 *****
2025-09-23 07:45:28.572877 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.572882 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.572890 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.572895 | orchestrator |
2025-09-23 07:45:28.572900 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-23 07:45:28.572905 | orchestrator | Tuesday 23 September 2025  07:42:08 +0000 (0:00:02.667)       0:07:36.729 *****
2025-09-23 07:45:28.572909 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:45:28.572914 | orchestrator |
2025-09-23 07:45:28.572919 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-23 07:45:28.572924 | orchestrator | Tuesday 23 September 2025  07:42:09 +0000 (0:00:00.607)       0:07:37.336 *****
2025-09-23 07:45:28.572929 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:45:28.572933 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:45:28.572938 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:45:28.572943 | orchestrator |
2025-09-23 07:45:28.572948 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-23 07:45:28.572956 | orchestrator | Tuesday 23 September 2025  07:42:10 +0000 (0:00:01.177)       0:07:38.514 *****
2025-09-23 07:45:28.572961 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:45:28.572966 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:45:28.572970 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:45:28.572975 | orchestrator |
2025-09-23 07:45:28.572980 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-23 07:45:28.572985 | orchestrator | Tuesday 23 September 2025  07:42:12 +0000 (0:00:01.526)       0:07:40.041 *****
2025-09-23 07:45:28.572989 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:45:28.572994 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:45:28.572999 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:45:28.573004 | orchestrator |
2025-09-23 07:45:28.573009 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-23 07:45:28.573013 | orchestrator | Tuesday 23 September 2025  07:42:14 +0000 (0:00:01.854)       0:07:41.895 *****
2025-09-23 07:45:28.573018 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573023 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573028 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573032 | orchestrator |
2025-09-23 07:45:28.573037 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-09-23 07:45:28.573042 | orchestrator | Tuesday 23 September 2025  07:42:14 +0000 (0:00:00.330)       0:07:42.226 *****
2025-09-23 07:45:28.573047 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573051 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573056 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573061 | orchestrator |
2025-09-23 07:45:28.573066 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-09-23 07:45:28.573070 | orchestrator | Tuesday 23 September 2025  07:42:14 +0000 (0:00:00.326)       0:07:42.552 *****
2025-09-23 07:45:28.573075 | orchestrator | ok: [testbed-node-3] => (item=5)
2025-09-23 07:45:28.573080 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-23 07:45:28.573085 | orchestrator | ok: [testbed-node-5] => (item=3)
2025-09-23 07:45:28.573090 | orchestrator | ok: [testbed-node-3] => (item=2)
2025-09-23 07:45:28.573094 | orchestrator | ok: [testbed-node-4] => (item=4)
2025-09-23 07:45:28.573099 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-09-23 07:45:28.573104 | orchestrator |
2025-09-23 07:45:28.573109 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-09-23 07:45:28.573114 | orchestrator | Tuesday 23 September 2025  07:42:16 +0000 (0:00:01.303)       0:07:43.856 *****
2025-09-23 07:45:28.573119 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-09-23 07:45:28.573123 | orchestrator | changed: [testbed-node-4] => (item=0)
2025-09-23 07:45:28.573128 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-09-23 07:45:28.573133 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-09-23 07:45:28.573137 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-09-23 07:45:28.573142 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-09-23 07:45:28.573150 | orchestrator |
2025-09-23 07:45:28.573158 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-09-23 07:45:28.573163 | orchestrator | Tuesday 23 September 2025  07:42:18 +0000 (0:00:02.182)       0:07:46.038 *****
2025-09-23 07:45:28.573167 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-09-23 07:45:28.573172 | orchestrator | changed: [testbed-node-4] => (item=0)
2025-09-23 07:45:28.573177 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-09-23 07:45:28.573182 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-09-23 07:45:28.573186 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-09-23 07:45:28.573191 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-09-23 07:45:28.573196 | orchestrator |
2025-09-23 07:45:28.573201 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-09-23 07:45:28.573205 | orchestrator | Tuesday 23 September 2025  07:42:21 +0000 (0:00:03.478)       0:07:49.517 *****
2025-09-23 07:45:28.573210 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573215 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573220 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-23 07:45:28.573224 | orchestrator |
2025-09-23 07:45:28.573229 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-09-23 07:45:28.573234 | orchestrator | Tuesday 23 September 2025  07:42:24 +0000 (0:00:02.583)       0:07:52.100 *****
2025-09-23 07:45:28.573239 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573244 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573249 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-09-23 07:45:28.573253 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-23 07:45:28.573258 | orchestrator |
2025-09-23 07:45:28.573263 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-09-23 07:45:28.573268 | orchestrator | Tuesday 23 September 2025  07:42:37 +0000 (0:00:13.157)       0:08:05.258 *****
2025-09-23 07:45:28.573272 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573277 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573282 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573287 | orchestrator |
2025-09-23 07:45:28.573291 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-23 07:45:28.573296 | orchestrator | Tuesday 23 September 2025  07:42:38 +0000 (0:00:00.851)       0:08:06.110 *****
2025-09-23 07:45:28.573301 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573306 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573310 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573315 | orchestrator |
2025-09-23 07:45:28.573320 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-23 07:45:28.573325 | orchestrator | Tuesday 23 September 2025  07:42:38 +0000 (0:00:00.574)       0:08:06.685 *****
2025-09-23 07:45:28.573330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:45:28.573334 | orchestrator |
2025-09-23 07:45:28.573339 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-23 07:45:28.573344 | orchestrator | Tuesday 23 September 2025  07:42:39 +0000 (0:00:00.538)       0:08:07.223 *****
2025-09-23 07:45:28.573349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-23 07:45:28.573356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-23 07:45:28.573361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-23 07:45:28.573376 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573381 | orchestrator |
2025-09-23 07:45:28.573386 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-23 07:45:28.573391 | orchestrator | Tuesday 23 September 2025  07:42:39 +0000 (0:00:00.394)       0:08:07.618 *****
2025-09-23 07:45:28.573395 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573403 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573408 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573413 | orchestrator |
2025-09-23 07:45:28.573418 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-23 07:45:28.573422 | orchestrator | Tuesday 23 September 2025  07:42:40 +0000 (0:00:00.337)       0:08:07.956 *****
2025-09-23 07:45:28.573427 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573432 | orchestrator |
2025-09-23 07:45:28.573437 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-23 07:45:28.573442 | orchestrator | Tuesday 23 September 2025  07:42:40 +0000 (0:00:00.261)       0:08:08.218 *****
2025-09-23 07:45:28.573446 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573451 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573456 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573460 | orchestrator |
2025-09-23 07:45:28.573465 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-23 07:45:28.573470 | orchestrator | Tuesday 23 September 2025  07:42:41 +0000 (0:00:00.808)       0:08:09.027 *****
2025-09-23 07:45:28.573475 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573479 | orchestrator |
2025-09-23 07:45:28.573484 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-23 07:45:28.573489 | orchestrator | Tuesday 23 September 2025  07:42:41 +0000 (0:00:00.268)       0:08:09.295 *****
2025-09-23 07:45:28.573494 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573498 | orchestrator |
2025-09-23 07:45:28.573503 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-23 07:45:28.573508 | orchestrator | Tuesday 23 September 2025  07:42:41 +0000 (0:00:00.243)       0:08:09.538 *****
2025-09-23 07:45:28.573513 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573518 | orchestrator |
2025-09-23 07:45:28.573522 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-23 07:45:28.573527 | orchestrator | Tuesday 23 September 2025  07:42:41 +0000 (0:00:00.155)       0:08:09.693 *****
2025-09-23 07:45:28.573532 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573537 | orchestrator |
2025-09-23 07:45:28.573541 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-23 07:45:28.573546 | orchestrator | Tuesday 23 September 2025  07:42:42 +0000 (0:00:00.233)       0:08:09.927 *****
2025-09-23 07:45:28.573554 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573559 | orchestrator |
2025-09-23 07:45:28.573564 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-23 07:45:28.573568 | orchestrator | Tuesday 23 September 2025  07:42:42 +0000 (0:00:00.238)       0:08:10.165 *****
2025-09-23 07:45:28.573573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-23 07:45:28.573578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-23 07:45:28.573583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-23 07:45:28.573587 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573592 | orchestrator |
2025-09-23 07:45:28.573597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-23 07:45:28.573602 | orchestrator | Tuesday 23 September 2025  07:42:42 +0000 (0:00:00.435)       0:08:10.601 *****
2025-09-23 07:45:28.573606 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573611 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573616 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573621 | orchestrator |
2025-09-23 07:45:28.573625 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-23 07:45:28.573630 | orchestrator | Tuesday 23 September 2025  07:42:43 +0000 (0:00:00.320)       0:08:10.921 *****
2025-09-23 07:45:28.573635 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573640 | orchestrator |
2025-09-23 07:45:28.573645 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-23 07:45:28.573649 | orchestrator | Tuesday 23 September 2025  07:42:43 +0000 (0:00:00.765)       0:08:11.686 *****
2025-09-23 07:45:28.573657 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573662 | orchestrator |
2025-09-23 07:45:28.573667 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-09-23 07:45:28.573671 | orchestrator |
2025-09-23 07:45:28.573676 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-23 07:45:28.573681 | orchestrator | Tuesday 23 September 2025  07:42:44 +0000 (0:00:00.672)       0:08:12.359 *****
2025-09-23 07:45:28.573686 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:45:28.573691 | orchestrator |
2025-09-23 07:45:28.573696 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-23 07:45:28.573701 | orchestrator | Tuesday 23 September 2025  07:42:45 +0000 (0:00:01.215)       0:08:13.574 *****
2025-09-23 07:45:28.573706 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:45:28.573711 | orchestrator |
2025-09-23 07:45:28.573715 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-23 07:45:28.573720 | orchestrator | Tuesday 23 September 2025  07:42:47 +0000 (0:00:01.281)       0:08:14.855 *****
2025-09-23 07:45:28.573725 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573730 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573734 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573739 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.573747 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.573752 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.573756 | orchestrator |
2025-09-23 07:45:28.573761 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-23 07:45:28.573766 | orchestrator | Tuesday 23 September 2025  07:42:48 +0000 (0:00:01.255)       0:08:16.111 *****
2025-09-23 07:45:28.573771 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.573776 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.573780 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.573785 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.573790 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.573795 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.573800 | orchestrator |
2025-09-23 07:45:28.573804 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-23 07:45:28.573809 | orchestrator | Tuesday 23 September 2025  07:42:48 +0000 (0:00:00.680)       0:08:16.791 *****
2025-09-23 07:45:28.573814 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.573819 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.573824 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.573828 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.573833 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.573838 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.573843 | orchestrator |
2025-09-23 07:45:28.573847 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-23 07:45:28.573852 | orchestrator | Tuesday 23 September 2025  07:42:49 +0000 (0:00:00.915)       0:08:17.706 *****
2025-09-23 07:45:28.573857 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.573862 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.573867 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.573871 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.573876 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.573881 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.573886 | orchestrator |
2025-09-23 07:45:28.573891 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-23 07:45:28.573895 | orchestrator | Tuesday 23 September 2025  07:42:50 +0000 (0:00:00.726)       0:08:18.433 *****
2025-09-23 07:45:28.573900 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573905 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573910 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573921 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.573925 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.573930 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.573935 | orchestrator |
2025-09-23 07:45:28.573940 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-23 07:45:28.573945 | orchestrator | Tuesday 23 September 2025  07:42:51 +0000 (0:00:01.034)       0:08:19.467 *****
2025-09-23 07:45:28.573950 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.573954 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.573959 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.573964 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.573969 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.573976 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.573981 | orchestrator |
2025-09-23 07:45:28.573986 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-23 07:45:28.573991 | orchestrator | Tuesday 23 September 2025  07:42:52 +0000 (0:00:00.921)       0:08:20.389 *****
2025-09-23 07:45:28.573995 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:45:28.574000 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:45:28.574005 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:45:28.574010 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:45:28.574046 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:45:28.574053 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:45:28.574058 | orchestrator |
2025-09-23 07:45:28.574063 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-23 07:45:28.574068 | orchestrator | Tuesday 23 September 2025  07:42:53 +0000 (0:00:00.632)       0:08:21.022 *****
2025-09-23 07:45:28.574072 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:45:28.574078 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:45:28.574083 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:45:28.574087 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:45:28.574092 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:45:28.574097 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:45:28.574102 |
orchestrator | 2025-09-23 07:45:28.574107 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-23 07:45:28.574112 | orchestrator | Tuesday 23 September 2025 07:42:54 +0000 (0:00:01.286) 0:08:22.308 ***** 2025-09-23 07:45:28.574116 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.574121 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.574126 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.574130 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.574135 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.574140 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.574145 | orchestrator | 2025-09-23 07:45:28.574149 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-23 07:45:28.574154 | orchestrator | Tuesday 23 September 2025 07:42:55 +0000 (0:00:00.996) 0:08:23.304 ***** 2025-09-23 07:45:28.574159 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.574164 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.574169 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.574173 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.574178 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.574183 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.574188 | orchestrator | 2025-09-23 07:45:28.574192 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-23 07:45:28.574197 | orchestrator | Tuesday 23 September 2025 07:42:56 +0000 (0:00:00.863) 0:08:24.167 ***** 2025-09-23 07:45:28.574202 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.574207 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.574211 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.574216 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.574221 | orchestrator | ok: [testbed-node-1] 2025-09-23 
07:45:28.574226 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.574231 | orchestrator | 2025-09-23 07:45:28.574235 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-23 07:45:28.574244 | orchestrator | Tuesday 23 September 2025 07:42:56 +0000 (0:00:00.612) 0:08:24.780 ***** 2025-09-23 07:45:28.574249 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.574254 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.574258 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.574263 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.574271 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.574276 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.574281 | orchestrator | 2025-09-23 07:45:28.574286 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-23 07:45:28.574290 | orchestrator | Tuesday 23 September 2025 07:42:57 +0000 (0:00:00.870) 0:08:25.651 ***** 2025-09-23 07:45:28.574295 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.574300 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.574305 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.574310 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.574314 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.574319 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.574324 | orchestrator | 2025-09-23 07:45:28.574329 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-23 07:45:28.574333 | orchestrator | Tuesday 23 September 2025 07:42:58 +0000 (0:00:00.623) 0:08:26.275 ***** 2025-09-23 07:45:28.574338 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.574343 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.574348 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.574352 | orchestrator | skipping: [testbed-node-0] 
2025-09-23 07:45:28.574357 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.574362 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.574392 | orchestrator | 2025-09-23 07:45:28.574397 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-23 07:45:28.574401 | orchestrator | Tuesday 23 September 2025 07:42:59 +0000 (0:00:00.845) 0:08:27.120 ***** 2025-09-23 07:45:28.574406 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.574411 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.574416 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.574421 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.574425 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.574430 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.574435 | orchestrator | 2025-09-23 07:45:28.574439 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-23 07:45:28.574444 | orchestrator | Tuesday 23 September 2025 07:42:59 +0000 (0:00:00.580) 0:08:27.700 ***** 2025-09-23 07:45:28.574448 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.574455 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.574463 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.574471 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:45:28.574478 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:45:28.574486 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:45:28.574494 | orchestrator | 2025-09-23 07:45:28.574501 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-23 07:45:28.574508 | orchestrator | Tuesday 23 September 2025 07:43:00 +0000 (0:00:00.905) 0:08:28.606 ***** 2025-09-23 07:45:28.574515 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.574523 | orchestrator | skipping: [testbed-node-4] 
2025-09-23 07:45:28.574530 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.574543 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.574551 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.574560 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.574564 | orchestrator | 2025-09-23 07:45:28.574569 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-23 07:45:28.574573 | orchestrator | Tuesday 23 September 2025 07:43:01 +0000 (0:00:00.622) 0:08:29.228 ***** 2025-09-23 07:45:28.574578 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.574582 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.574591 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.574596 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.574600 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.574605 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.574609 | orchestrator | 2025-09-23 07:45:28.574614 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-23 07:45:28.574618 | orchestrator | Tuesday 23 September 2025 07:43:02 +0000 (0:00:00.848) 0:08:30.077 ***** 2025-09-23 07:45:28.574623 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.574627 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.574631 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.574636 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.574640 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.574645 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.574649 | orchestrator | 2025-09-23 07:45:28.574654 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-23 07:45:28.574658 | orchestrator | Tuesday 23 September 2025 07:43:03 +0000 (0:00:01.260) 0:08:31.337 ***** 2025-09-23 07:45:28.574663 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2025-09-23 07:45:28.574667 | orchestrator | 2025-09-23 07:45:28.574672 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-23 07:45:28.574676 | orchestrator | Tuesday 23 September 2025 07:43:07 +0000 (0:00:03.956) 0:08:35.293 ***** 2025-09-23 07:45:28.574681 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-23 07:45:28.574685 | orchestrator | 2025-09-23 07:45:28.574690 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-23 07:45:28.574694 | orchestrator | Tuesday 23 September 2025 07:43:09 +0000 (0:00:01.931) 0:08:37.225 ***** 2025-09-23 07:45:28.574698 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.574703 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.574707 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.574712 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.574716 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.574721 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.574725 | orchestrator | 2025-09-23 07:45:28.574730 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-23 07:45:28.574734 | orchestrator | Tuesday 23 September 2025 07:43:10 +0000 (0:00:01.480) 0:08:38.705 ***** 2025-09-23 07:45:28.574739 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.574743 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.574748 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.574752 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.574757 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.574761 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.574765 | orchestrator | 2025-09-23 07:45:28.574770 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2025-09-23 07:45:28.574774 | orchestrator | Tuesday 23 September 2025 07:43:12 +0000 (0:00:01.296) 0:08:40.002 ***** 2025-09-23 07:45:28.574782 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.574788 | orchestrator | 2025-09-23 07:45:28.574792 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-23 07:45:28.574797 | orchestrator | Tuesday 23 September 2025 07:43:13 +0000 (0:00:01.262) 0:08:41.265 ***** 2025-09-23 07:45:28.574801 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.574806 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.574810 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.574815 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.574819 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.574824 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.574828 | orchestrator | 2025-09-23 07:45:28.574833 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-23 07:45:28.574841 | orchestrator | Tuesday 23 September 2025 07:43:15 +0000 (0:00:01.644) 0:08:42.909 ***** 2025-09-23 07:45:28.574845 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.574850 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.574854 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.574859 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.574863 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.574867 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.574872 | orchestrator | 2025-09-23 07:45:28.574877 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-23 07:45:28.574881 | orchestrator | Tuesday 23 September 2025 07:43:18 +0000 (0:00:03.581) 
0:08:46.491 ***** 2025-09-23 07:45:28.574886 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:45:28.574891 | orchestrator | 2025-09-23 07:45:28.574895 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-23 07:45:28.574900 | orchestrator | Tuesday 23 September 2025 07:43:19 +0000 (0:00:01.280) 0:08:47.771 ***** 2025-09-23 07:45:28.574904 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.574909 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.574913 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.574918 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.574922 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.574926 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.574931 | orchestrator | 2025-09-23 07:45:28.574935 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-23 07:45:28.574940 | orchestrator | Tuesday 23 September 2025 07:43:20 +0000 (0:00:00.731) 0:08:48.503 ***** 2025-09-23 07:45:28.574944 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.574949 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.574953 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.574961 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:45:28.574965 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:45:28.574970 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:45:28.574974 | orchestrator | 2025-09-23 07:45:28.574979 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-23 07:45:28.574983 | orchestrator | Tuesday 23 September 2025 07:43:23 +0000 (0:00:02.561) 0:08:51.064 ***** 2025-09-23 07:45:28.574988 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.574992 | orchestrator | 
ok: [testbed-node-4] 2025-09-23 07:45:28.574997 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575001 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:45:28.575006 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:45:28.575010 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:45:28.575015 | orchestrator | 2025-09-23 07:45:28.575019 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-23 07:45:28.575024 | orchestrator | 2025-09-23 07:45:28.575028 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-23 07:45:28.575033 | orchestrator | Tuesday 23 September 2025 07:43:24 +0000 (0:00:00.880) 0:08:51.945 ***** 2025-09-23 07:45:28.575037 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.575042 | orchestrator | 2025-09-23 07:45:28.575046 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-23 07:45:28.575051 | orchestrator | Tuesday 23 September 2025 07:43:24 +0000 (0:00:00.834) 0:08:52.780 ***** 2025-09-23 07:45:28.575055 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.575060 | orchestrator | 2025-09-23 07:45:28.575064 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-23 07:45:28.575069 | orchestrator | Tuesday 23 September 2025 07:43:25 +0000 (0:00:00.539) 0:08:53.319 ***** 2025-09-23 07:45:28.575073 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575081 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575086 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575090 | orchestrator | 2025-09-23 07:45:28.575095 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2025-09-23 07:45:28.575099 | orchestrator | Tuesday 23 September 2025 07:43:26 +0000 (0:00:00.568) 0:08:53.888 ***** 2025-09-23 07:45:28.575104 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575108 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575113 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575117 | orchestrator | 2025-09-23 07:45:28.575122 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-23 07:45:28.575126 | orchestrator | Tuesday 23 September 2025 07:43:26 +0000 (0:00:00.782) 0:08:54.670 ***** 2025-09-23 07:45:28.575131 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575135 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575140 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575144 | orchestrator | 2025-09-23 07:45:28.575148 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-23 07:45:28.575153 | orchestrator | Tuesday 23 September 2025 07:43:27 +0000 (0:00:00.744) 0:08:55.415 ***** 2025-09-23 07:45:28.575157 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575162 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575166 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575171 | orchestrator | 2025-09-23 07:45:28.575211 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-23 07:45:28.575216 | orchestrator | Tuesday 23 September 2025 07:43:28 +0000 (0:00:00.759) 0:08:56.175 ***** 2025-09-23 07:45:28.575221 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575226 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575230 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575235 | orchestrator | 2025-09-23 07:45:28.575239 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-23 
07:45:28.575244 | orchestrator | Tuesday 23 September 2025 07:43:29 +0000 (0:00:00.651) 0:08:56.827 ***** 2025-09-23 07:45:28.575248 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575253 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575257 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575262 | orchestrator | 2025-09-23 07:45:28.575267 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-23 07:45:28.575271 | orchestrator | Tuesday 23 September 2025 07:43:29 +0000 (0:00:00.331) 0:08:57.158 ***** 2025-09-23 07:45:28.575276 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575280 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575285 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575289 | orchestrator | 2025-09-23 07:45:28.575294 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-23 07:45:28.575299 | orchestrator | Tuesday 23 September 2025 07:43:29 +0000 (0:00:00.313) 0:08:57.471 ***** 2025-09-23 07:45:28.575303 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575308 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575312 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575317 | orchestrator | 2025-09-23 07:45:28.575321 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-23 07:45:28.575326 | orchestrator | Tuesday 23 September 2025 07:43:30 +0000 (0:00:00.773) 0:08:58.244 ***** 2025-09-23 07:45:28.575331 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575335 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575340 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575344 | orchestrator | 2025-09-23 07:45:28.575349 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-23 07:45:28.575353 | orchestrator | 
Tuesday 23 September 2025 07:43:31 +0000 (0:00:01.089) 0:08:59.334 ***** 2025-09-23 07:45:28.575358 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575371 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575376 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575384 | orchestrator | 2025-09-23 07:45:28.575388 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-23 07:45:28.575393 | orchestrator | Tuesday 23 September 2025 07:43:31 +0000 (0:00:00.379) 0:08:59.714 ***** 2025-09-23 07:45:28.575397 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575402 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575407 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575411 | orchestrator | 2025-09-23 07:45:28.575418 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-23 07:45:28.575423 | orchestrator | Tuesday 23 September 2025 07:43:32 +0000 (0:00:00.319) 0:09:00.034 ***** 2025-09-23 07:45:28.575428 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575432 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575437 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575441 | orchestrator | 2025-09-23 07:45:28.575446 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-23 07:45:28.575450 | orchestrator | Tuesday 23 September 2025 07:43:32 +0000 (0:00:00.426) 0:09:00.461 ***** 2025-09-23 07:45:28.575455 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575459 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575464 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575468 | orchestrator | 2025-09-23 07:45:28.575473 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-23 07:45:28.575477 | orchestrator | Tuesday 23 September 2025 07:43:33 
+0000 (0:00:00.691) 0:09:01.152 ***** 2025-09-23 07:45:28.575482 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575486 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575490 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575495 | orchestrator | 2025-09-23 07:45:28.575499 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-23 07:45:28.575504 | orchestrator | Tuesday 23 September 2025 07:43:33 +0000 (0:00:00.342) 0:09:01.495 ***** 2025-09-23 07:45:28.575508 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575513 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575517 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575522 | orchestrator | 2025-09-23 07:45:28.575526 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-23 07:45:28.575531 | orchestrator | Tuesday 23 September 2025 07:43:34 +0000 (0:00:00.347) 0:09:01.842 ***** 2025-09-23 07:45:28.575535 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575540 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575544 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575549 | orchestrator | 2025-09-23 07:45:28.575553 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-23 07:45:28.575558 | orchestrator | Tuesday 23 September 2025 07:43:34 +0000 (0:00:00.312) 0:09:02.155 ***** 2025-09-23 07:45:28.575562 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575567 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575571 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575576 | orchestrator | 2025-09-23 07:45:28.575580 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-23 07:45:28.575585 | orchestrator | Tuesday 23 September 2025 07:43:34 +0000 (0:00:00.591) 
0:09:02.747 ***** 2025-09-23 07:45:28.575589 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575593 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575598 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575602 | orchestrator | 2025-09-23 07:45:28.575607 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-23 07:45:28.575612 | orchestrator | Tuesday 23 September 2025 07:43:35 +0000 (0:00:00.336) 0:09:03.084 ***** 2025-09-23 07:45:28.575616 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.575621 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.575625 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.575630 | orchestrator | 2025-09-23 07:45:28.575634 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-23 07:45:28.575645 | orchestrator | Tuesday 23 September 2025 07:43:35 +0000 (0:00:00.575) 0:09:03.660 ***** 2025-09-23 07:45:28.575650 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.575654 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.575659 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-23 07:45:28.575663 | orchestrator | 2025-09-23 07:45:28.575668 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-23 07:45:28.575672 | orchestrator | Tuesday 23 September 2025 07:43:36 +0000 (0:00:00.772) 0:09:04.432 ***** 2025-09-23 07:45:28.575677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-23 07:45:28.575681 | orchestrator | 2025-09-23 07:45:28.575686 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-23 07:45:28.575690 | orchestrator | Tuesday 23 September 2025 07:43:38 +0000 (0:00:02.210) 0:09:06.642 ***** 2025-09-23 07:45:28.575696 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-23 07:45:28.575702 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.575707 | orchestrator | 2025-09-23 07:45:28.575711 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-23 07:45:28.575716 | orchestrator | Tuesday 23 September 2025 07:43:39 +0000 (0:00:00.224) 0:09:06.867 ***** 2025-09-23 07:45:28.575722 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-23 07:45:28.575732 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-23 07:45:28.575817 | orchestrator | 2025-09-23 07:45:28.575823 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-23 07:45:28.575827 | orchestrator | Tuesday 23 September 2025 07:43:47 +0000 (0:00:08.256) 0:09:15.124 ***** 2025-09-23 07:45:28.575832 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-23 07:45:28.575836 | orchestrator | 2025-09-23 07:45:28.575844 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-23 07:45:28.575849 | orchestrator | Tuesday 23 September 2025 07:43:50 +0000 (0:00:03.695) 0:09:18.820 ***** 2025-09-23 07:45:28.575853 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-23 07:45:28.575858 | orchestrator | 2025-09-23 07:45:28.575862 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-23 07:45:28.575867 | orchestrator | Tuesday 23 September 2025 07:43:51 +0000 (0:00:00.775) 0:09:19.596 ***** 2025-09-23 07:45:28.575871 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-23 07:45:28.575876 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-23 07:45:28.575880 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-23 07:45:28.575885 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-23 07:45:28.575889 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-23 07:45:28.575894 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-23 07:45:28.575898 | orchestrator | 2025-09-23 07:45:28.575903 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-23 07:45:28.575907 | orchestrator | Tuesday 23 September 2025 07:43:52 +0000 (0:00:01.183) 0:09:20.780 ***** 2025-09-23 07:45:28.575916 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.575920 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-23 07:45:28.575925 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-23 07:45:28.575929 | orchestrator | 2025-09-23 07:45:28.575934 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-23 07:45:28.575938 | orchestrator | Tuesday 23 September 2025 07:43:55 +0000 (0:00:02.234) 0:09:23.015 ***** 2025-09-23 07:45:28.575943 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-23 07:45:28.575947 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-09-23 07:45:28.575952 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.575956 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-23 07:45:28.575960 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-23 07:45:28.575965 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.575969 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-23 07:45:28.575974 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-23 07:45:28.575978 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.575983 | orchestrator | 2025-09-23 07:45:28.575987 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-23 07:45:28.575992 | orchestrator | Tuesday 23 September 2025 07:43:56 +0000 (0:00:01.255) 0:09:24.270 ***** 2025-09-23 07:45:28.575996 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.576001 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.576005 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.576010 | orchestrator | 2025-09-23 07:45:28.576014 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-23 07:45:28.576022 | orchestrator | Tuesday 23 September 2025 07:43:59 +0000 (0:00:02.763) 0:09:27.034 ***** 2025-09-23 07:45:28.576026 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576031 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576035 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.576040 | orchestrator | 2025-09-23 07:45:28.576044 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-23 07:45:28.576049 | orchestrator | Tuesday 23 September 2025 07:44:00 +0000 (0:00:00.896) 0:09:27.930 ***** 2025-09-23 07:45:28.576053 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-23 07:45:28.576058 | orchestrator | 2025-09-23 07:45:28.576062 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-23 07:45:28.576067 | orchestrator | Tuesday 23 September 2025 07:44:00 +0000 (0:00:00.612) 0:09:28.543 ***** 2025-09-23 07:45:28.576071 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.576076 | orchestrator | 2025-09-23 07:45:28.576080 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-23 07:45:28.576085 | orchestrator | Tuesday 23 September 2025 07:44:01 +0000 (0:00:00.741) 0:09:29.285 ***** 2025-09-23 07:45:28.576089 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.576093 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.576098 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.576102 | orchestrator | 2025-09-23 07:45:28.576107 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-23 07:45:28.576111 | orchestrator | Tuesday 23 September 2025 07:44:02 +0000 (0:00:01.271) 0:09:30.556 ***** 2025-09-23 07:45:28.576116 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.576120 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.576125 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.576129 | orchestrator | 2025-09-23 07:45:28.576134 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-23 07:45:28.576138 | orchestrator | Tuesday 23 September 2025 07:44:03 +0000 (0:00:01.143) 0:09:31.700 ***** 2025-09-23 07:45:28.576142 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.576151 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.576155 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.576160 | orchestrator | 2025-09-23 
07:45:28.576164 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-23 07:45:28.576169 | orchestrator | Tuesday 23 September 2025 07:44:05 +0000 (0:00:01.753) 0:09:33.453 ***** 2025-09-23 07:45:28.576173 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.576178 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.576182 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.576187 | orchestrator | 2025-09-23 07:45:28.576194 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-23 07:45:28.576198 | orchestrator | Tuesday 23 September 2025 07:44:08 +0000 (0:00:02.733) 0:09:36.187 ***** 2025-09-23 07:45:28.576203 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576207 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576211 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576216 | orchestrator | 2025-09-23 07:45:28.576220 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-23 07:45:28.576225 | orchestrator | Tuesday 23 September 2025 07:44:09 +0000 (0:00:01.322) 0:09:37.509 ***** 2025-09-23 07:45:28.576229 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.576234 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.576238 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.576243 | orchestrator | 2025-09-23 07:45:28.576247 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-23 07:45:28.576252 | orchestrator | Tuesday 23 September 2025 07:44:10 +0000 (0:00:01.013) 0:09:38.523 ***** 2025-09-23 07:45:28.576256 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.576261 | orchestrator | 2025-09-23 07:45:28.576265 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2025-09-23 07:45:28.576270 | orchestrator | Tuesday 23 September 2025 07:44:11 +0000 (0:00:00.566) 0:09:39.089 ***** 2025-09-23 07:45:28.576274 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576278 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576283 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576287 | orchestrator | 2025-09-23 07:45:28.576292 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-23 07:45:28.576296 | orchestrator | Tuesday 23 September 2025 07:44:11 +0000 (0:00:00.322) 0:09:39.411 ***** 2025-09-23 07:45:28.576301 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.576305 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.576310 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.576315 | orchestrator | 2025-09-23 07:45:28.576319 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-23 07:45:28.576323 | orchestrator | Tuesday 23 September 2025 07:44:13 +0000 (0:00:01.560) 0:09:40.972 ***** 2025-09-23 07:45:28.576328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.576332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.576337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.576341 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576346 | orchestrator | 2025-09-23 07:45:28.576350 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-23 07:45:28.576355 | orchestrator | Tuesday 23 September 2025 07:44:13 +0000 (0:00:00.644) 0:09:41.617 ***** 2025-09-23 07:45:28.576359 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576375 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576379 | orchestrator | ok: [testbed-node-5] 2025-09-23 
07:45:28.576384 | orchestrator | 2025-09-23 07:45:28.576388 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-23 07:45:28.576393 | orchestrator | 2025-09-23 07:45:28.576397 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-23 07:45:28.576408 | orchestrator | Tuesday 23 September 2025 07:44:14 +0000 (0:00:00.566) 0:09:42.184 ***** 2025-09-23 07:45:28.576418 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.576423 | orchestrator | 2025-09-23 07:45:28.576427 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-23 07:45:28.576432 | orchestrator | Tuesday 23 September 2025 07:44:15 +0000 (0:00:00.752) 0:09:42.937 ***** 2025-09-23 07:45:28.576436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.576441 | orchestrator | 2025-09-23 07:45:28.576445 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-23 07:45:28.576450 | orchestrator | Tuesday 23 September 2025 07:44:15 +0000 (0:00:00.513) 0:09:43.450 ***** 2025-09-23 07:45:28.576454 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576459 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576463 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.576468 | orchestrator | 2025-09-23 07:45:28.576472 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-23 07:45:28.576477 | orchestrator | Tuesday 23 September 2025 07:44:16 +0000 (0:00:00.532) 0:09:43.983 ***** 2025-09-23 07:45:28.576481 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576486 | orchestrator | ok: [testbed-node-4] 2025-09-23 
07:45:28.576490 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576494 | orchestrator | 2025-09-23 07:45:28.576499 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-23 07:45:28.576503 | orchestrator | Tuesday 23 September 2025 07:44:16 +0000 (0:00:00.710) 0:09:44.693 ***** 2025-09-23 07:45:28.576508 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576512 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576517 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576521 | orchestrator | 2025-09-23 07:45:28.576526 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-23 07:45:28.576530 | orchestrator | Tuesday 23 September 2025 07:44:17 +0000 (0:00:00.720) 0:09:45.414 ***** 2025-09-23 07:45:28.576535 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576539 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576544 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576548 | orchestrator | 2025-09-23 07:45:28.576553 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-23 07:45:28.576557 | orchestrator | Tuesday 23 September 2025 07:44:18 +0000 (0:00:00.738) 0:09:46.153 ***** 2025-09-23 07:45:28.576562 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576566 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576571 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.576575 | orchestrator | 2025-09-23 07:45:28.576580 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-23 07:45:28.576587 | orchestrator | Tuesday 23 September 2025 07:44:18 +0000 (0:00:00.592) 0:09:46.745 ***** 2025-09-23 07:45:28.576591 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576596 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576600 | orchestrator | skipping: 
[testbed-node-5] 2025-09-23 07:45:28.576605 | orchestrator | 2025-09-23 07:45:28.576609 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-23 07:45:28.576614 | orchestrator | Tuesday 23 September 2025 07:44:19 +0000 (0:00:00.325) 0:09:47.071 ***** 2025-09-23 07:45:28.576618 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576623 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576627 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.576632 | orchestrator | 2025-09-23 07:45:28.576636 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-23 07:45:28.576641 | orchestrator | Tuesday 23 September 2025 07:44:19 +0000 (0:00:00.299) 0:09:47.371 ***** 2025-09-23 07:45:28.576645 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576653 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576658 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576662 | orchestrator | 2025-09-23 07:45:28.576667 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-23 07:45:28.576671 | orchestrator | Tuesday 23 September 2025 07:44:20 +0000 (0:00:00.761) 0:09:48.133 ***** 2025-09-23 07:45:28.576676 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576680 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576685 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576689 | orchestrator | 2025-09-23 07:45:28.576693 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-23 07:45:28.576698 | orchestrator | Tuesday 23 September 2025 07:44:21 +0000 (0:00:00.967) 0:09:49.100 ***** 2025-09-23 07:45:28.576703 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576707 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576712 | orchestrator | skipping: [testbed-node-5] 2025-09-23 
07:45:28.576716 | orchestrator | 2025-09-23 07:45:28.576721 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-23 07:45:28.576725 | orchestrator | Tuesday 23 September 2025 07:44:21 +0000 (0:00:00.313) 0:09:49.414 ***** 2025-09-23 07:45:28.576730 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576734 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576739 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.576743 | orchestrator | 2025-09-23 07:45:28.576748 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-23 07:45:28.576752 | orchestrator | Tuesday 23 September 2025 07:44:21 +0000 (0:00:00.299) 0:09:49.714 ***** 2025-09-23 07:45:28.576757 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576761 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576766 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576770 | orchestrator | 2025-09-23 07:45:28.576775 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-23 07:45:28.576779 | orchestrator | Tuesday 23 September 2025 07:44:22 +0000 (0:00:00.330) 0:09:50.044 ***** 2025-09-23 07:45:28.576784 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576788 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576793 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576797 | orchestrator | 2025-09-23 07:45:28.576802 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-23 07:45:28.576806 | orchestrator | Tuesday 23 September 2025 07:44:22 +0000 (0:00:00.615) 0:09:50.660 ***** 2025-09-23 07:45:28.576811 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576818 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576822 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576826 | orchestrator | 2025-09-23 
07:45:28.576831 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-23 07:45:28.576835 | orchestrator | Tuesday 23 September 2025 07:44:23 +0000 (0:00:00.371) 0:09:51.031 ***** 2025-09-23 07:45:28.576840 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576844 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576849 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.576853 | orchestrator | 2025-09-23 07:45:28.576858 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-23 07:45:28.576862 | orchestrator | Tuesday 23 September 2025 07:44:23 +0000 (0:00:00.322) 0:09:51.354 ***** 2025-09-23 07:45:28.576867 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576871 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576876 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.576880 | orchestrator | 2025-09-23 07:45:28.576885 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-23 07:45:28.576889 | orchestrator | Tuesday 23 September 2025 07:44:23 +0000 (0:00:00.364) 0:09:51.719 ***** 2025-09-23 07:45:28.576894 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.576898 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.576903 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.576911 | orchestrator | 2025-09-23 07:45:28.576916 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-23 07:45:28.576920 | orchestrator | Tuesday 23 September 2025 07:44:24 +0000 (0:00:00.573) 0:09:52.292 ***** 2025-09-23 07:45:28.576925 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576929 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576934 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576938 | orchestrator | 2025-09-23 07:45:28.576943 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-23 07:45:28.576947 | orchestrator | Tuesday 23 September 2025 07:44:24 +0000 (0:00:00.356) 0:09:52.649 ***** 2025-09-23 07:45:28.576952 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.576956 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.576961 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.576965 | orchestrator | 2025-09-23 07:45:28.576970 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-23 07:45:28.576974 | orchestrator | Tuesday 23 September 2025 07:44:25 +0000 (0:00:00.576) 0:09:53.225 ***** 2025-09-23 07:45:28.576979 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.576983 | orchestrator | 2025-09-23 07:45:28.576988 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-23 07:45:28.576992 | orchestrator | Tuesday 23 September 2025 07:44:26 +0000 (0:00:00.798) 0:09:54.024 ***** 2025-09-23 07:45:28.576999 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.577003 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-23 07:45:28.577008 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-23 07:45:28.577013 | orchestrator | 2025-09-23 07:45:28.577017 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-23 07:45:28.577021 | orchestrator | Tuesday 23 September 2025 07:44:28 +0000 (0:00:02.258) 0:09:56.283 ***** 2025-09-23 07:45:28.577026 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-23 07:45:28.577030 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-23 07:45:28.577035 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.577039 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-09-23 07:45:28.577044 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-23 07:45:28.577048 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.577053 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-23 07:45:28.577057 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-23 07:45:28.577062 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.577066 | orchestrator | 2025-09-23 07:45:28.577071 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-23 07:45:28.577075 | orchestrator | Tuesday 23 September 2025 07:44:29 +0000 (0:00:01.259) 0:09:57.542 ***** 2025-09-23 07:45:28.577080 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.577084 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.577088 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.577093 | orchestrator | 2025-09-23 07:45:28.577097 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-23 07:45:28.577102 | orchestrator | Tuesday 23 September 2025 07:44:30 +0000 (0:00:00.311) 0:09:57.854 ***** 2025-09-23 07:45:28.577106 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.577111 | orchestrator | 2025-09-23 07:45:28.577115 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-23 07:45:28.577120 | orchestrator | Tuesday 23 September 2025 07:44:30 +0000 (0:00:00.779) 0:09:58.633 ***** 2025-09-23 07:45:28.577124 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.577129 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.577139 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.577143 | orchestrator | 2025-09-23 07:45:28.577148 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-23 07:45:28.577152 | orchestrator | Tuesday 23 September 2025 07:44:31 +0000 (0:00:00.803) 0:09:59.436 ***** 2025-09-23 07:45:28.577159 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.577164 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-23 07:45:28.577168 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.577173 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-23 07:45:28.577177 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.577182 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-23 07:45:28.577186 | orchestrator | 2025-09-23 07:45:28.577191 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-23 07:45:28.577195 | orchestrator | Tuesday 23 September 2025 07:44:36 +0000 (0:00:04.556) 0:10:03.993 ***** 2025-09-23 07:45:28.577199 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.577204 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-23 07:45:28.577208 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.577213 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-23 07:45:28.577217 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:45:28.577222 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-23 07:45:28.577226 | orchestrator | 2025-09-23 07:45:28.577231 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-23 07:45:28.577235 | orchestrator | Tuesday 23 September 2025 07:44:38 +0000 (0:00:02.784) 0:10:06.777 ***** 2025-09-23 07:45:28.577239 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-23 07:45:28.577244 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.577248 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-23 07:45:28.577253 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.577257 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-23 07:45:28.577262 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.577266 | orchestrator | 2025-09-23 07:45:28.577271 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-23 07:45:28.577275 | orchestrator | Tuesday 23 September 2025 07:44:40 +0000 (0:00:01.295) 0:10:08.072 ***** 2025-09-23 07:45:28.577282 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-23 07:45:28.577287 | orchestrator | 2025-09-23 07:45:28.577291 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-23 07:45:28.577296 | orchestrator | Tuesday 23 September 2025 07:44:40 +0000 (0:00:00.286) 0:10:08.359 ***** 2025-09-23 07:45:28.577300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-09-23 07:45:28.577305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577327 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.577332 | orchestrator | 2025-09-23 07:45:28.577336 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-23 07:45:28.577341 | orchestrator | Tuesday 23 September 2025 07:44:41 +0000 (0:00:00.581) 0:10:08.940 ***** 2025-09-23 07:45:28.577346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-23 07:45:28.577394 | orchestrator | skipping: [testbed-node-3] 2025-09-23 
07:45:28.577398 | orchestrator | 2025-09-23 07:45:28.577403 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-23 07:45:28.577407 | orchestrator | Tuesday 23 September 2025 07:44:41 +0000 (0:00:00.636) 0:10:09.577 ***** 2025-09-23 07:45:28.577412 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-23 07:45:28.577419 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-23 07:45:28.577424 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-23 07:45:28.577428 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-23 07:45:28.577433 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-23 07:45:28.577437 | orchestrator | 2025-09-23 07:45:28.577442 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-23 07:45:28.577446 | orchestrator | Tuesday 23 September 2025 07:45:12 +0000 (0:00:31.036) 0:10:40.614 ***** 2025-09-23 07:45:28.577451 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.577455 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.577460 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.577464 | orchestrator | 2025-09-23 07:45:28.577469 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-23 07:45:28.577473 | orchestrator | 
Tuesday 23 September 2025 07:45:13 +0000 (0:00:00.332) 0:10:40.946 ***** 2025-09-23 07:45:28.577478 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.577482 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.577487 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.577491 | orchestrator | 2025-09-23 07:45:28.577495 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-23 07:45:28.577500 | orchestrator | Tuesday 23 September 2025 07:45:13 +0000 (0:00:00.617) 0:10:41.564 ***** 2025-09-23 07:45:28.577504 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.577513 | orchestrator | 2025-09-23 07:45:28.577517 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-23 07:45:28.577522 | orchestrator | Tuesday 23 September 2025 07:45:14 +0000 (0:00:00.543) 0:10:42.108 ***** 2025-09-23 07:45:28.577526 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.577531 | orchestrator | 2025-09-23 07:45:28.577535 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-23 07:45:28.577540 | orchestrator | Tuesday 23 September 2025 07:45:15 +0000 (0:00:00.781) 0:10:42.890 ***** 2025-09-23 07:45:28.577547 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.577551 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.577556 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.577560 | orchestrator | 2025-09-23 07:45:28.577565 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-23 07:45:28.577569 | orchestrator | Tuesday 23 September 2025 07:45:16 +0000 (0:00:01.246) 0:10:44.136 ***** 2025-09-23 07:45:28.577574 | orchestrator | changed: 
[testbed-node-3] 2025-09-23 07:45:28.577578 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.577583 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.577587 | orchestrator | 2025-09-23 07:45:28.577592 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-23 07:45:28.577596 | orchestrator | Tuesday 23 September 2025 07:45:17 +0000 (0:00:01.192) 0:10:45.329 ***** 2025-09-23 07:45:28.577601 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:45:28.577605 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:45:28.577609 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:45:28.577614 | orchestrator | 2025-09-23 07:45:28.577618 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-23 07:45:28.577623 | orchestrator | Tuesday 23 September 2025 07:45:19 +0000 (0:00:01.717) 0:10:47.047 ***** 2025-09-23 07:45:28.577627 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.577632 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.577636 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-23 07:45:28.577641 | orchestrator | 2025-09-23 07:45:28.577645 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-23 07:45:28.577650 | orchestrator | Tuesday 23 September 2025 07:45:22 +0000 (0:00:03.634) 0:10:50.681 ***** 2025-09-23 07:45:28.577654 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.577659 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.577663 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.577668 | orchestrator 
| 2025-09-23 07:45:28.577672 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-23 07:45:28.577677 | orchestrator | Tuesday 23 September 2025 07:45:23 +0000 (0:00:00.373) 0:10:51.055 ***** 2025-09-23 07:45:28.577681 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:45:28.577686 | orchestrator | 2025-09-23 07:45:28.577690 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-23 07:45:28.577694 | orchestrator | Tuesday 23 September 2025 07:45:24 +0000 (0:00:00.836) 0:10:51.891 ***** 2025-09-23 07:45:28.577699 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.577703 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.577708 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.577712 | orchestrator | 2025-09-23 07:45:28.577717 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-23 07:45:28.577724 | orchestrator | Tuesday 23 September 2025 07:45:24 +0000 (0:00:00.322) 0:10:52.213 ***** 2025-09-23 07:45:28.577728 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:45:28.577735 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:45:28.577740 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:45:28.577744 | orchestrator | 2025-09-23 07:45:28.577749 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-23 07:45:28.577753 | orchestrator | Tuesday 23 September 2025 07:45:24 +0000 (0:00:00.348) 0:10:52.563 ***** 2025-09-23 07:45:28.577758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:45:28.577762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:45:28.577766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:45:28.577771 | orchestrator 
| skipping: [testbed-node-3] 2025-09-23 07:45:28.577775 | orchestrator | 2025-09-23 07:45:28.577780 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-23 07:45:28.577784 | orchestrator | Tuesday 23 September 2025 07:45:25 +0000 (0:00:01.151) 0:10:53.714 ***** 2025-09-23 07:45:28.577789 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:45:28.577793 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:45:28.577797 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:45:28.577802 | orchestrator | 2025-09-23 07:45:28.577806 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:45:28.577811 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-23 07:45:28.577815 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-23 07:45:28.577820 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-23 07:45:28.577824 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-23 07:45:28.577829 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-23 07:45:28.577833 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-23 07:45:28.577838 | orchestrator | 2025-09-23 07:45:28.577842 | orchestrator | 2025-09-23 07:45:28.577847 | orchestrator | 2025-09-23 07:45:28.577854 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:45:28.577859 | orchestrator | Tuesday 23 September 2025 07:45:26 +0000 (0:00:00.255) 0:10:53.969 ***** 2025-09-23 07:45:28.577863 | orchestrator | =============================================================================== 
2025-09-23 07:45:28.577868 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 45.23s 2025-09-23 07:45:28.577872 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.19s 2025-09-23 07:45:28.577877 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.04s 2025-09-23 07:45:28.577881 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.88s 2025-09-23 07:45:28.577885 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.87s 2025-09-23 07:45:28.577890 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.52s 2025-09-23 07:45:28.577894 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.16s 2025-09-23 07:45:28.577899 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 11.30s 2025-09-23 07:45:28.577903 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.83s 2025-09-23 07:45:28.577908 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.26s 2025-09-23 07:45:28.577912 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.06s 2025-09-23 07:45:28.577920 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.81s 2025-09-23 07:45:28.577925 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.79s 2025-09-23 07:45:28.577929 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.56s 2025-09-23 07:45:28.577933 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.96s 2025-09-23 07:45:28.577937 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.85s 2025-09-23 
07:45:28.577941 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.70s 2025-09-23 07:45:28.577945 | orchestrator | ceph-rgw : Systemd start rgw container ---------------------------------- 3.63s 2025-09-23 07:45:28.577949 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.58s 2025-09-23 07:45:28.577953 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.48s 2025-09-23 07:45:28.577957 | orchestrator | 2025-09-23 07:45:28 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state STARTED 2025-09-23 07:45:28.577961 | orchestrator | 2025-09-23 07:45:28 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:45:28.577965 | orchestrator | 2025-09-23 07:45:28 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:45:28.577972 | orchestrator | 2025-09-23 07:45:28 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:45:31.607916 | orchestrator | 2025-09-23 07:45:31 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state STARTED 2025-09-23 07:45:31.609579 | orchestrator | 2025-09-23 07:45:31 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:45:31.611675 | orchestrator | 2025-09-23 07:45:31 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:45:31.611886 | orchestrator | 2025-09-23 07:45:31 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:45:34.653786 | orchestrator | 2025-09-23 07:45:34 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state STARTED 2025-09-23 07:45:34.655000 | orchestrator | 2025-09-23 07:45:34 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:45:34.655054 | orchestrator | 2025-09-23 07:45:34 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:45:34.655078 | orchestrator | 2025-09-23 07:45:34 | 
INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:11.255024 | orchestrator | 2025-09-23 07:46:11 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state STARTED 2025-09-23 07:46:11.257072 | orchestrator | 2025-09-23 07:46:11 | INFO  | Task
aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:46:11.258241 | orchestrator | 2025-09-23 07:46:11 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:11.258266 | orchestrator | 2025-09-23 07:46:11 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:14.307506 | orchestrator | 2025-09-23 07:46:14 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state STARTED 2025-09-23 07:46:14.309159 | orchestrator | 2025-09-23 07:46:14 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:46:14.311595 | orchestrator | 2025-09-23 07:46:14 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:14.311627 | orchestrator | 2025-09-23 07:46:14 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:17.364782 | orchestrator | 2025-09-23 07:46:17 | INFO  | Task c480cb5f-1286-494a-a785-3cfb27435b6a is in state SUCCESS 2025-09-23 07:46:17.366261 | orchestrator | 2025-09-23 07:46:17.366295 | orchestrator | 2025-09-23 07:46:17.366305 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:46:17.366313 | orchestrator | 2025-09-23 07:46:17.366334 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:46:17.366342 | orchestrator | Tuesday 23 September 2025 07:43:20 +0000 (0:00:00.265) 0:00:00.265 ***** 2025-09-23 07:46:17.366350 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:17.366358 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:17.366366 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:17.366373 | orchestrator | 2025-09-23 07:46:17.366380 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:46:17.366387 | orchestrator | Tuesday 23 September 2025 07:43:20 +0000 (0:00:00.348) 0:00:00.613 ***** 2025-09-23 07:46:17.366395 | orchestrator | ok: 
[testbed-node-0] => (item=enable_opensearch_True) 2025-09-23 07:46:17.366402 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-23 07:46:17.366409 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-23 07:46:17.366416 | orchestrator | 2025-09-23 07:46:17.366423 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-23 07:46:17.366433 | orchestrator | 2025-09-23 07:46:17.366441 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-23 07:46:17.366449 | orchestrator | Tuesday 23 September 2025 07:43:20 +0000 (0:00:00.433) 0:00:01.047 ***** 2025-09-23 07:46:17.366457 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:46:17.366464 | orchestrator | 2025-09-23 07:46:17.366471 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-23 07:46:17.366485 | orchestrator | Tuesday 23 September 2025 07:43:21 +0000 (0:00:00.503) 0:00:01.551 ***** 2025-09-23 07:46:17.366493 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-23 07:46:17.366515 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-23 07:46:17.366523 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-23 07:46:17.366530 | orchestrator | 2025-09-23 07:46:17.366537 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-23 07:46:17.366561 | orchestrator | Tuesday 23 September 2025 07:43:22 +0000 (0:00:00.699) 0:00:02.250 ***** 2025-09-23 07:46:17.366572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.366582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.366599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.366608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.366627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.366669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.366676 | orchestrator | 2025-09-23 07:46:17.366681 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-23 07:46:17.366686 | orchestrator | Tuesday 23 September 2025 07:43:23 +0000 (0:00:01.808) 0:00:04.058 ***** 2025-09-23 07:46:17.366690 | 
orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:46:17.366694 | orchestrator | 2025-09-23 07:46:17.366698 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-23 07:46:17.366702 | orchestrator | Tuesday 23 September 2025 07:43:24 +0000 (0:00:00.527) 0:00:04.586 ***** 2025-09-23 07:46:17.366712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.366719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.366727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.366732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.366739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.366744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.366752 | orchestrator | 2025-09-23 07:46:17.366756 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-23 07:46:17.366760 | orchestrator | Tuesday 23 September 2025 07:43:27 +0000 (0:00:02.926) 0:00:07.512 ***** 2025-09-23 07:46:17.366765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-23 07:46:17.366769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-23 07:46:17.366774 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:17.366851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-23 07:46:17.366868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-23 07:46:17.366879 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:17.366886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-23 07:46:17.366892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-23 07:46:17.366897 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:17.366902 | orchestrator | 2025-09-23 07:46:17.366906 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-23 07:46:17.366912 | orchestrator | Tuesday 23 September 2025 07:43:28 +0000 (0:00:00.951) 0:00:08.463 ***** 2025-09-23 07:46:17.366917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-23 07:46:17.366925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-23 07:46:17.366933 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:17.366941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-23 07:46:17.366946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-23 07:46:17.366951 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:17.366956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-23 07:46:17.366965 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-23 07:46:17.366975 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:17.366980 | orchestrator | 2025-09-23 07:46:17.366984 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-23 07:46:17.366989 | orchestrator | Tuesday 23 September 2025 07:43:29 +0000 (0:00:01.245) 0:00:09.709 ***** 2025-09-23 07:46:17.366996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.367001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.367007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.367015 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.367025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.367031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.367036 | orchestrator | 2025-09-23 07:46:17.367041 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-23 07:46:17.367046 | orchestrator | Tuesday 23 September 2025 07:43:31 +0000 (0:00:02.420) 0:00:12.129 ***** 2025-09-23 07:46:17.367050 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:46:17.367055 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:17.367060 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:46:17.367064 | orchestrator | 2025-09-23 07:46:17.367069 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-23 07:46:17.367074 | orchestrator | Tuesday 23 September 2025 07:43:35 +0000 
(0:00:03.336) 0:00:15.465 ***** 2025-09-23 07:46:17.367079 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:17.367083 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:46:17.367088 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:46:17.367093 | orchestrator | 2025-09-23 07:46:17.367098 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-23 07:46:17.367102 | orchestrator | Tuesday 23 September 2025 07:43:37 +0000 (0:00:02.235) 0:00:17.701 ***** 2025-09-23 07:46:17.367107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.367118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.367126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-23 07:46:17.367131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.367136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.367147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-23 07:46:17.367152 | orchestrator | 2025-09-23 07:46:17.367157 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-23 07:46:17.367163 | orchestrator | Tuesday 23 September 2025 07:43:39 +0000 (0:00:02.001) 0:00:19.702 ***** 2025-09-23 07:46:17.367170 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:17.367176 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:17.367184 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:17.367190 | orchestrator | 2025-09-23 07:46:17.367197 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-23 07:46:17.367204 | orchestrator | Tuesday 23 September 2025 07:43:39 +0000 (0:00:00.301) 0:00:20.004 ***** 2025-09-23 07:46:17.367211 | orchestrator | 2025-09-23 07:46:17.367218 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-23 07:46:17.367229 | orchestrator | Tuesday 23 September 2025 07:43:39 +0000 (0:00:00.062) 0:00:20.067 ***** 2025-09-23 07:46:17.367236 | orchestrator | 2025-09-23 07:46:17.367243 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-23 07:46:17.367249 | orchestrator | Tuesday 23 September 2025 07:43:39 +0000 (0:00:00.065) 0:00:20.132 ***** 2025-09-23 07:46:17.367256 | orchestrator | 2025-09-23 07:46:17.367262 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-23 07:46:17.367269 | orchestrator | 
Tuesday 23 September 2025 07:43:39 +0000 (0:00:00.064) 0:00:20.197 ***** 2025-09-23 07:46:17.367276 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:17.367283 | orchestrator | 2025-09-23 07:46:17.367290 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-23 07:46:17.367296 | orchestrator | Tuesday 23 September 2025 07:43:40 +0000 (0:00:00.215) 0:00:20.413 ***** 2025-09-23 07:46:17.367303 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:17.367309 | orchestrator | 2025-09-23 07:46:17.367316 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-23 07:46:17.367342 | orchestrator | Tuesday 23 September 2025 07:43:40 +0000 (0:00:00.637) 0:00:21.051 ***** 2025-09-23 07:46:17.367349 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:17.367356 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:46:17.367362 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:46:17.367369 | orchestrator | 2025-09-23 07:46:17.367376 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-23 07:46:17.367389 | orchestrator | Tuesday 23 September 2025 07:44:45 +0000 (0:01:04.964) 0:01:26.015 ***** 2025-09-23 07:46:17.367396 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:17.367403 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:46:17.367409 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:46:17.367416 | orchestrator | 2025-09-23 07:46:17.367423 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-23 07:46:17.367430 | orchestrator | Tuesday 23 September 2025 07:46:05 +0000 (0:01:19.625) 0:02:45.640 ***** 2025-09-23 07:46:17.367437 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:46:17.367445 | orchestrator | 2025-09-23 
07:46:17.367452 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-23 07:46:17.367459 | orchestrator | Tuesday 23 September 2025 07:46:05 +0000 (0:00:00.540) 0:02:46.181 ***** 2025-09-23 07:46:17.367466 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:17.367473 | orchestrator | 2025-09-23 07:46:17.367480 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-23 07:46:17.367487 | orchestrator | Tuesday 23 September 2025 07:46:08 +0000 (0:00:02.650) 0:02:48.832 ***** 2025-09-23 07:46:17.367493 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:17.367499 | orchestrator | 2025-09-23 07:46:17.367505 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-23 07:46:17.367512 | orchestrator | Tuesday 23 September 2025 07:46:10 +0000 (0:00:02.166) 0:02:50.999 ***** 2025-09-23 07:46:17.367518 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:17.367525 | orchestrator | 2025-09-23 07:46:17.367531 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-23 07:46:17.367538 | orchestrator | Tuesday 23 September 2025 07:46:13 +0000 (0:00:02.906) 0:02:53.905 ***** 2025-09-23 07:46:17.367546 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:17.367612 | orchestrator | 2025-09-23 07:46:17.367619 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:46:17.367624 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-23 07:46:17.367629 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-23 07:46:17.367634 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-23 07:46:17.367638 | orchestrator | 2025-09-23 07:46:17.367642 
| orchestrator | 2025-09-23 07:46:17.367646 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:46:17.367656 | orchestrator | Tuesday 23 September 2025 07:46:16 +0000 (0:00:02.552) 0:02:56.457 ***** 2025-09-23 07:46:17.367660 | orchestrator | =============================================================================== 2025-09-23 07:46:17.367664 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 79.63s 2025-09-23 07:46:17.367668 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.96s 2025-09-23 07:46:17.367672 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.34s 2025-09-23 07:46:17.367676 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.93s 2025-09-23 07:46:17.367680 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.91s 2025-09-23 07:46:17.367685 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.65s 2025-09-23 07:46:17.367689 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.55s 2025-09-23 07:46:17.367722 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.42s 2025-09-23 07:46:17.367726 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.24s 2025-09-23 07:46:17.367730 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.17s 2025-09-23 07:46:17.367741 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.00s 2025-09-23 07:46:17.367745 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.81s 2025-09-23 07:46:17.367752 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.25s 
2025-09-23 07:46:17.367757 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.95s 2025-09-23 07:46:17.367761 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2025-09-23 07:46:17.367765 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.64s 2025-09-23 07:46:17.367769 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2025-09-23 07:46:17.367773 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-09-23 07:46:17.367777 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2025-09-23 07:46:17.367781 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-09-23 07:46:17.367785 | orchestrator | 2025-09-23 07:46:17 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:46:17.367790 | orchestrator | 2025-09-23 07:46:17 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:17.367794 | orchestrator | 2025-09-23 07:46:17 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:20.413031 | orchestrator | 2025-09-23 07:46:20 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:46:20.415305 | orchestrator | 2025-09-23 07:46:20 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:20.415699 | orchestrator | 2025-09-23 07:46:20 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:23.448375 | orchestrator | 2025-09-23 07:46:23 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:46:23.450835 | orchestrator | 2025-09-23 07:46:23 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:23.450872 | orchestrator | 2025-09-23 07:46:23 | INFO  | Wait 1 
second(s) until the next check 2025-09-23 07:46:26.497255 | orchestrator | 2025-09-23 07:46:26 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:46:26.499370 | orchestrator | 2025-09-23 07:46:26 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:26.499430 | orchestrator | 2025-09-23 07:46:26 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:29.543735 | orchestrator | 2025-09-23 07:46:29 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:46:29.545097 | orchestrator | 2025-09-23 07:46:29 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:29.545128 | orchestrator | 2025-09-23 07:46:29 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:32.588634 | orchestrator | 2025-09-23 07:46:32 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state STARTED 2025-09-23 07:46:32.590241 | orchestrator | 2025-09-23 07:46:32 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:32.590421 | orchestrator | 2025-09-23 07:46:32 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:35.632002 | orchestrator | 2025-09-23 07:46:35 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:46:35.637395 | orchestrator | 2025-09-23 07:46:35 | INFO  | Task aaf20b18-7aa5-4767-9ab3-495d2cc0395e is in state SUCCESS 2025-09-23 07:46:35.639236 | orchestrator | 2025-09-23 07:46:35.639639 | orchestrator | 2025-09-23 07:46:35.639673 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-23 07:46:35.639710 | orchestrator | 2025-09-23 07:46:35.639722 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-23 07:46:35.639734 | orchestrator | Tuesday 23 September 2025 07:43:19 +0000 (0:00:00.103) 0:00:00.103 ***** 2025-09-23 07:46:35.639745 | orchestrator | 
ok: [localhost] => { 2025-09-23 07:46:35.639757 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-23 07:46:35.639768 | orchestrator | } 2025-09-23 07:46:35.639779 | orchestrator | 2025-09-23 07:46:35.639790 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-23 07:46:35.639801 | orchestrator | Tuesday 23 September 2025 07:43:19 +0000 (0:00:00.054) 0:00:00.158 ***** 2025-09-23 07:46:35.639812 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-23 07:46:35.639824 | orchestrator | ...ignoring 2025-09-23 07:46:35.639834 | orchestrator | 2025-09-23 07:46:35.639845 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-23 07:46:35.639856 | orchestrator | Tuesday 23 September 2025 07:43:22 +0000 (0:00:02.972) 0:00:03.130 ***** 2025-09-23 07:46:35.639866 | orchestrator | skipping: [localhost] 2025-09-23 07:46:35.639877 | orchestrator | 2025-09-23 07:46:35.639888 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-23 07:46:35.639899 | orchestrator | Tuesday 23 September 2025 07:43:22 +0000 (0:00:00.045) 0:00:03.176 ***** 2025-09-23 07:46:35.639910 | orchestrator | ok: [localhost] 2025-09-23 07:46:35.639920 | orchestrator | 2025-09-23 07:46:35.639931 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:46:35.639941 | orchestrator | 2025-09-23 07:46:35.639964 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:46:35.639975 | orchestrator | Tuesday 23 September 2025 07:43:23 +0000 (0:00:00.180) 0:00:03.357 ***** 2025-09-23 07:46:35.639986 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.639997 | 
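
The ignored failure above is the announced pre-flight probe: the play checks whether MariaDB already answers on the internal VIP before choosing between `deploy` and `upgrade`. The error text "Timeout when waiting for search string MariaDB in 192.168.16.9:3306" is the characteristic timeout message of Ansible's built-in `wait_for` module with `search_regex`. A minimal sketch of such a check, assuming `wait_for` (task name and timeout are illustrative, the VIP is taken from the log):

```yaml
# Hedged sketch: probe whether a MariaDB server is already listening on the
# internal VIP. MariaDB greets TCP clients with a banner containing "MariaDB",
# so search_regex matches only a live server, not just an open port.
- name: Check MariaDB service
  ansible.builtin.wait_for:
    host: "192.168.16.9"      # internal VIP seen in the log; adjust per environment
    port: 3306
    search_regex: "MariaDB"
    timeout: 2                # fail fast on a fresh deployment
  register: mariadb_check
  ignore_errors: true         # "...ignoring" in the log: failure here is expected
```

On a first deployment the probe times out (as above), the `upgrade` branch is skipped, and `kolla_action_mariadb` falls back to the normal deploy action.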
orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:35.640007 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:35.640018 | orchestrator | 2025-09-23 07:46:35.640029 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:46:35.640039 | orchestrator | Tuesday 23 September 2025 07:43:23 +0000 (0:00:00.319) 0:00:03.677 ***** 2025-09-23 07:46:35.640050 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-23 07:46:35.640060 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-23 07:46:35.640071 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-23 07:46:35.640081 | orchestrator | 2025-09-23 07:46:35.640092 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-23 07:46:35.640102 | orchestrator | 2025-09-23 07:46:35.640113 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-23 07:46:35.640123 | orchestrator | Tuesday 23 September 2025 07:43:23 +0000 (0:00:00.543) 0:00:04.221 ***** 2025-09-23 07:46:35.640134 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-23 07:46:35.640145 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-23 07:46:35.640155 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-23 07:46:35.640166 | orchestrator | 2025-09-23 07:46:35.640177 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-23 07:46:35.640187 | orchestrator | Tuesday 23 September 2025 07:43:24 +0000 (0:00:00.400) 0:00:04.621 ***** 2025-09-23 07:46:35.640198 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:46:35.640243 | orchestrator | 2025-09-23 07:46:35.640257 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 
2025-09-23 07:46:35.640269 | orchestrator | Tuesday 23 September 2025 07:43:24 +0000 (0:00:00.612) 0:00:05.234 ***** 2025-09-23 07:46:35.640325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 07:46:35.640365 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 07:46:35.640390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 07:46:35.640421 | orchestrator | 2025-09-23 07:46:35.640454 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-23 07:46:35.640474 | orchestrator | Tuesday 23 September 2025 07:43:27 +0000 (0:00:02.928) 0:00:08.163 ***** 2025-09-23 07:46:35.640494 | orchestrator 
| skipping: [testbed-node-1] 2025-09-23 07:46:35.640506 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.640517 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.640528 | orchestrator | 2025-09-23 07:46:35.640539 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-23 07:46:35.640550 | orchestrator | Tuesday 23 September 2025 07:43:28 +0000 (0:00:00.711) 0:00:08.874 ***** 2025-09-23 07:46:35.640696 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.640719 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.640738 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.640757 | orchestrator | 2025-09-23 07:46:35.640776 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-23 07:46:35.640873 | orchestrator | Tuesday 23 September 2025 07:43:30 +0000 (0:00:01.837) 0:00:10.712 ***** 2025-09-23 07:46:35.640899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 07:46:35.640936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 07:46:35.640967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': 
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 07:46:35.640988 | orchestrator | 2025-09-23 07:46:35.641000 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-23 07:46:35.641018 | orchestrator | Tuesday 23 September 2025 07:43:34 +0000 (0:00:04.236) 0:00:14.948 ***** 2025-09-23 07:46:35.641029 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.641039 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.641050 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.641061 | orchestrator | 2025-09-23 07:46:35.641071 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-23 07:46:35.641082 | orchestrator | Tuesday 23 September 2025 07:43:35 +0000 (0:00:01.205) 0:00:16.154 ***** 2025-09-23 07:46:35.641092 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.641103 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:46:35.641114 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:46:35.641124 | orchestrator | 2025-09-23 07:46:35.641135 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-23 07:46:35.641146 | orchestrator | Tuesday 23 September 2025 07:43:40 +0000 (0:00:04.437) 0:00:20.591 ***** 2025-09-23 07:46:35.641156 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:46:35.641167 | orchestrator | 2025-09-23 
07:46:35.641178 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-23 07:46:35.641189 | orchestrator | Tuesday 23 September 2025 07:43:40 +0000 (0:00:00.512) 0:00:21.104 ***** 2025-09-23 07:46:35.641210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:46:35.641223 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.641239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-09-23 07:46:35.641258 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.641276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:46:35.641289 | orchestrator | skipping: 
[testbed-node-1] 2025-09-23 07:46:35.641300 | orchestrator | 2025-09-23 07:46:35.641334 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-23 07:46:35.641345 | orchestrator | Tuesday 23 September 2025 07:43:44 +0000 (0:00:03.629) 0:00:24.733 ***** 2025-09-23 07:46:35.641383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:46:35.641403 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.641421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:46:35.641433 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.641449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-09-23 07:46:35.641473 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.641484 | orchestrator | 2025-09-23 07:46:35.641495 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-23 07:46:35.641505 | orchestrator | Tuesday 23 September 2025 07:43:47 +0000 (0:00:02.826) 0:00:27.560 ***** 2025-09-23 07:46:35.641517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:46:35.641529 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.641554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:46:35.641572 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.641584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-23 07:46:35.641596 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.641607 | orchestrator | 2025-09-23 07:46:35.641618 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-23 07:46:35.641629 | orchestrator | Tuesday 23 September 2025 07:43:50 +0000 (0:00:02.979) 0:00:30.539 ***** 2025-09-23 07:46:35.641653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 07:46:35.641673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 07:46:35.641694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-23 
07:46:35.641713 | orchestrator | 2025-09-23 07:46:35.641724 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-23 07:46:35.641734 | orchestrator | Tuesday 23 September 2025 07:43:53 +0000 (0:00:03.586) 0:00:34.125 ***** 2025-09-23 07:46:35.641745 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.641760 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:46:35.641771 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:46:35.641782 | orchestrator | 2025-09-23 07:46:35.641792 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-23 07:46:35.641803 | orchestrator | Tuesday 23 September 2025 07:43:54 +0000 (0:00:00.917) 0:00:35.042 ***** 2025-09-23 07:46:35.641814 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.641825 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:35.641836 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:35.641847 | orchestrator | 2025-09-23 07:46:35.641858 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-23 07:46:35.641869 | orchestrator | Tuesday 23 September 2025 07:43:55 +0000 (0:00:00.528) 0:00:35.571 ***** 2025-09-23 07:46:35.641880 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.641891 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:35.641902 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:35.641913 | orchestrator | 2025-09-23 07:46:35.641924 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-23 07:46:35.641936 | orchestrator | Tuesday 23 September 2025 07:43:55 +0000 (0:00:00.311) 0:00:35.882 ***** 2025-09-23 07:46:35.641948 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-23 07:46:35.641959 | orchestrator | ...ignoring 2025-09-23 07:46:35.641970 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-23 07:46:35.641981 | orchestrator | ...ignoring 2025-09-23 07:46:35.641992 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-23 07:46:35.642003 | orchestrator | ...ignoring 2025-09-23 07:46:35.642054 | orchestrator | 2025-09-23 07:46:35.642070 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-23 07:46:35.642081 | orchestrator | Tuesday 23 September 2025 07:44:06 +0000 (0:00:10.950) 0:00:46.833 ***** 2025-09-23 07:46:35.642091 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.642102 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:35.642113 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:35.642123 | orchestrator | 2025-09-23 07:46:35.642134 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-23 07:46:35.642145 | orchestrator | Tuesday 23 September 2025 07:44:06 +0000 (0:00:00.418) 0:00:47.252 ***** 2025-09-23 07:46:35.642156 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.642167 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.642177 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.642188 | orchestrator | 2025-09-23 07:46:35.642199 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-23 07:46:35.642210 | orchestrator | Tuesday 23 September 2025 07:44:07 +0000 (0:00:00.621) 0:00:47.874 ***** 2025-09-23 07:46:35.642221 | orchestrator | skipping: 
[testbed-node-0] 2025-09-23 07:46:35.642231 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.642242 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.642253 | orchestrator | 2025-09-23 07:46:35.642263 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-23 07:46:35.642282 | orchestrator | Tuesday 23 September 2025 07:44:08 +0000 (0:00:00.471) 0:00:48.345 ***** 2025-09-23 07:46:35.642293 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.642319 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.642330 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.642341 | orchestrator | 2025-09-23 07:46:35.642352 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-23 07:46:35.642363 | orchestrator | Tuesday 23 September 2025 07:44:08 +0000 (0:00:00.485) 0:00:48.831 ***** 2025-09-23 07:46:35.642374 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.642384 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:35.642395 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:35.642406 | orchestrator | 2025-09-23 07:46:35.642417 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-23 07:46:35.642428 | orchestrator | Tuesday 23 September 2025 07:44:08 +0000 (0:00:00.423) 0:00:49.254 ***** 2025-09-23 07:46:35.642445 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.642456 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.642467 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.642478 | orchestrator | 2025-09-23 07:46:35.642489 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-23 07:46:35.642500 | orchestrator | Tuesday 23 September 2025 07:44:09 +0000 (0:00:00.913) 0:00:50.168 ***** 2025-09-23 07:46:35.642510 | orchestrator | skipping: 
[testbed-node-1] 2025-09-23 07:46:35.642521 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.642532 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-23 07:46:35.642543 | orchestrator | 2025-09-23 07:46:35.642554 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-23 07:46:35.642565 | orchestrator | Tuesday 23 September 2025 07:44:10 +0000 (0:00:00.375) 0:00:50.544 ***** 2025-09-23 07:46:35.642576 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.642586 | orchestrator | 2025-09-23 07:46:35.642597 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-23 07:46:35.642608 | orchestrator | Tuesday 23 September 2025 07:44:21 +0000 (0:00:11.159) 0:01:01.703 ***** 2025-09-23 07:46:35.642618 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.642629 | orchestrator | 2025-09-23 07:46:35.642640 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-23 07:46:35.642651 | orchestrator | Tuesday 23 September 2025 07:44:21 +0000 (0:00:00.140) 0:01:01.844 ***** 2025-09-23 07:46:35.642661 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.642672 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.642683 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.642693 | orchestrator | 2025-09-23 07:46:35.642704 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-23 07:46:35.642715 | orchestrator | Tuesday 23 September 2025 07:44:22 +0000 (0:00:00.971) 0:01:02.816 ***** 2025-09-23 07:46:35.642730 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.642741 | orchestrator | 2025-09-23 07:46:35.642751 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-23 07:46:35.642762 | orchestrator | Tuesday 23 
September 2025 07:44:30 +0000 (0:00:07.964) 0:01:10.780 ***** 2025-09-23 07:46:35.642773 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.642784 | orchestrator | 2025-09-23 07:46:35.642794 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-23 07:46:35.642805 | orchestrator | Tuesday 23 September 2025 07:44:32 +0000 (0:00:01.585) 0:01:12.366 ***** 2025-09-23 07:46:35.642815 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.642826 | orchestrator | 2025-09-23 07:46:35.642837 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-23 07:46:35.642847 | orchestrator | Tuesday 23 September 2025 07:44:34 +0000 (0:00:02.613) 0:01:14.979 ***** 2025-09-23 07:46:35.642858 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.642869 | orchestrator | 2025-09-23 07:46:35.642886 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-23 07:46:35.642897 | orchestrator | Tuesday 23 September 2025 07:44:34 +0000 (0:00:00.137) 0:01:15.117 ***** 2025-09-23 07:46:35.642908 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.642919 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.642929 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.642940 | orchestrator | 2025-09-23 07:46:35.642951 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-23 07:46:35.642961 | orchestrator | Tuesday 23 September 2025 07:44:35 +0000 (0:00:00.326) 0:01:15.443 ***** 2025-09-23 07:46:35.642972 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.642983 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-23 07:46:35.642993 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:46:35.643004 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:46:35.643015 | orchestrator | 
2025-09-23 07:46:35.643026 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-23 07:46:35.643036 | orchestrator | skipping: no hosts matched 2025-09-23 07:46:35.643047 | orchestrator | 2025-09-23 07:46:35.643058 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-23 07:46:35.643068 | orchestrator | 2025-09-23 07:46:35.643079 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-23 07:46:35.643090 | orchestrator | Tuesday 23 September 2025 07:44:35 +0000 (0:00:00.571) 0:01:16.015 ***** 2025-09-23 07:46:35.643100 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:46:35.643111 | orchestrator | 2025-09-23 07:46:35.643122 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-23 07:46:35.643132 | orchestrator | Tuesday 23 September 2025 07:44:55 +0000 (0:00:19.921) 0:01:35.937 ***** 2025-09-23 07:46:35.643143 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:35.643154 | orchestrator | 2025-09-23 07:46:35.643164 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-23 07:46:35.643175 | orchestrator | Tuesday 23 September 2025 07:45:16 +0000 (0:00:20.594) 0:01:56.532 ***** 2025-09-23 07:46:35.643185 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:35.643197 | orchestrator | 2025-09-23 07:46:35.643207 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-23 07:46:35.643218 | orchestrator | 2025-09-23 07:46:35.643229 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-23 07:46:35.643239 | orchestrator | Tuesday 23 September 2025 07:45:18 +0000 (0:00:02.352) 0:01:58.885 ***** 2025-09-23 07:46:35.643250 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:46:35.643261 | orchestrator | 
2025-09-23 07:46:35.643272 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-23 07:46:35.643282 | orchestrator | Tuesday 23 September 2025 07:45:38 +0000 (0:00:20.085) 0:02:18.971 ***** 2025-09-23 07:46:35.643293 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:35.643359 | orchestrator | 2025-09-23 07:46:35.643372 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-23 07:46:35.643383 | orchestrator | Tuesday 23 September 2025 07:45:59 +0000 (0:00:20.592) 0:02:39.563 ***** 2025-09-23 07:46:35.643394 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:35.643405 | orchestrator | 2025-09-23 07:46:35.643416 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-23 07:46:35.643427 | orchestrator | 2025-09-23 07:46:35.643445 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-23 07:46:35.643456 | orchestrator | Tuesday 23 September 2025 07:46:02 +0000 (0:00:02.761) 0:02:42.324 ***** 2025-09-23 07:46:35.643467 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.643477 | orchestrator | 2025-09-23 07:46:35.643488 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-23 07:46:35.643499 | orchestrator | Tuesday 23 September 2025 07:46:13 +0000 (0:00:11.683) 0:02:54.008 ***** 2025-09-23 07:46:35.643509 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.643528 | orchestrator | 2025-09-23 07:46:35.643539 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-23 07:46:35.643550 | orchestrator | Tuesday 23 September 2025 07:46:18 +0000 (0:00:04.631) 0:02:58.640 ***** 2025-09-23 07:46:35.643560 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.643571 | orchestrator | 2025-09-23 07:46:35.643582 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-09-23 07:46:35.643592 | orchestrator | 2025-09-23 07:46:35.643603 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-23 07:46:35.643614 | orchestrator | Tuesday 23 September 2025 07:46:20 +0000 (0:00:02.441) 0:03:01.082 ***** 2025-09-23 07:46:35.643624 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:46:35.643635 | orchestrator | 2025-09-23 07:46:35.643646 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-23 07:46:35.643656 | orchestrator | Tuesday 23 September 2025 07:46:21 +0000 (0:00:00.468) 0:03:01.550 ***** 2025-09-23 07:46:35.643667 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.643678 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.643688 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.643699 | orchestrator | 2025-09-23 07:46:35.643709 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-23 07:46:35.643725 | orchestrator | Tuesday 23 September 2025 07:46:23 +0000 (0:00:02.301) 0:03:03.851 ***** 2025-09-23 07:46:35.643736 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.643747 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.643757 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.643768 | orchestrator | 2025-09-23 07:46:35.643778 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-23 07:46:35.643789 | orchestrator | Tuesday 23 September 2025 07:46:25 +0000 (0:00:02.333) 0:03:06.185 ***** 2025-09-23 07:46:35.643800 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.643810 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.643821 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.643831 | orchestrator | 
2025-09-23 07:46:35.643842 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-23 07:46:35.643853 | orchestrator | Tuesday 23 September 2025 07:46:27 +0000 (0:00:02.072) 0:03:08.257 ***** 2025-09-23 07:46:35.643864 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.643875 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.643886 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:46:35.643897 | orchestrator | 2025-09-23 07:46:35.643907 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-23 07:46:35.643916 | orchestrator | Tuesday 23 September 2025 07:46:30 +0000 (0:00:02.094) 0:03:10.352 ***** 2025-09-23 07:46:35.643926 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:46:35.643935 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:46:35.643945 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:46:35.643954 | orchestrator | 2025-09-23 07:46:35.643964 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-23 07:46:35.643973 | orchestrator | Tuesday 23 September 2025 07:46:33 +0000 (0:00:02.973) 0:03:13.325 ***** 2025-09-23 07:46:35.643982 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:46:35.643992 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:46:35.644001 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:46:35.644011 | orchestrator | 2025-09-23 07:46:35.644020 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:46:35.644030 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-23 07:46:35.644040 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-23 07:46:35.644050 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2025-09-23 07:46:35.644066 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-23 07:46:35.644075 | orchestrator | 2025-09-23 07:46:35.644085 | orchestrator | 2025-09-23 07:46:35.644094 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:46:35.644104 | orchestrator | Tuesday 23 September 2025 07:46:33 +0000 (0:00:00.453) 0:03:13.778 ***** 2025-09-23 07:46:35.644113 | orchestrator | =============================================================================== 2025-09-23 07:46:35.644123 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.19s 2025-09-23 07:46:35.644133 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 40.01s 2025-09-23 07:46:35.644142 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.68s 2025-09-23 07:46:35.644151 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.16s 2025-09-23 07:46:35.644161 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.95s 2025-09-23 07:46:35.644170 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.96s 2025-09-23 07:46:35.644185 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.11s 2025-09-23 07:46:35.644195 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.63s 2025-09-23 07:46:35.644204 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.44s 2025-09-23 07:46:35.644213 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.24s 2025-09-23 07:46:35.644223 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.63s 2025-09-23 07:46:35.644232 | orchestrator | 
mariadb : Check mariadb containers -------------------------------------- 3.59s 2025-09-23 07:46:35.644241 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.98s 2025-09-23 07:46:35.644251 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.97s 2025-09-23 07:46:35.644260 | orchestrator | Check MariaDB service --------------------------------------------------- 2.97s 2025-09-23 07:46:35.644270 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.93s 2025-09-23 07:46:35.644279 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.83s 2025-09-23 07:46:35.644288 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.61s 2025-09-23 07:46:35.644298 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.44s 2025-09-23 07:46:35.644321 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.33s 2025-09-23 07:46:35.644331 | orchestrator | 2025-09-23 07:46:35 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:35.644345 | orchestrator | 2025-09-23 07:46:35 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:46:35.644355 | orchestrator | 2025-09-23 07:46:35 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:46:38.690457 | orchestrator | 2025-09-23 07:46:38 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:46:38.691233 | orchestrator | 2025-09-23 07:46:38 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:46:38.692657 | orchestrator | 2025-09-23 07:46:38 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:46:38.692672 | orchestrator | 2025-09-23 07:46:38 | INFO  | Wait 1 second(s) until the next check 2025-09-23 
07:47:27.377154 | orchestrator | 2025-09-23 07:47:27 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:27.378189 | orchestrator | 
2025-09-23 07:47:27 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:47:27.379400 | orchestrator | 2025-09-23 07:47:27 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:27.379743 | orchestrator | 2025-09-23 07:47:27 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:30.414870 | orchestrator | 2025-09-23 07:47:30 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:30.417566 | orchestrator | 2025-09-23 07:47:30 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:47:30.419237 | orchestrator | 2025-09-23 07:47:30 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:30.419303 | orchestrator | 2025-09-23 07:47:30 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:33.463680 | orchestrator | 2025-09-23 07:47:33 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:33.464418 | orchestrator | 2025-09-23 07:47:33 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:47:33.466345 | orchestrator | 2025-09-23 07:47:33 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:33.466373 | orchestrator | 2025-09-23 07:47:33 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:36.503471 | orchestrator | 2025-09-23 07:47:36 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:36.504937 | orchestrator | 2025-09-23 07:47:36 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED 2025-09-23 07:47:36.506657 | orchestrator | 2025-09-23 07:47:36 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:36.506694 | orchestrator | 2025-09-23 07:47:36 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:39.555732 | orchestrator | 2025-09-23 07:47:39 | INFO  | Task 
d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:47:39.557581 | orchestrator | 2025-09-23 07:47:39 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state STARTED
2025-09-23 07:47:39.562771 | orchestrator | 2025-09-23 07:47:39 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED
2025-09-23 07:47:39.562848 | orchestrator | 2025-09-23 07:47:39 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:47:42.609677 | orchestrator | 2025-09-23 07:47:42 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED
2025-09-23 07:47:42.610420 | orchestrator | 2025-09-23 07:47:42 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:47:42.620008 | orchestrator | 2025-09-23 07:47:42 | INFO  | Task 5fff6bb6-96c0-40e2-8f9a-a115d008b19a is in state SUCCESS
2025-09-23 07:47:42.622289 | orchestrator |
2025-09-23 07:47:42.622342 | orchestrator |
2025-09-23 07:47:42.622356 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-09-23 07:47:42.622369 | orchestrator |
2025-09-23 07:47:42.622380 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-23 07:47:42.622399 | orchestrator | Tuesday 23 September 2025 07:45:31 +0000 (0:00:00.672) 0:00:00.672 *****
2025-09-23 07:47:42.622417 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:47:42.622474 | orchestrator |
2025-09-23 07:47:42.622493 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-23 07:47:42.622512 | orchestrator | Tuesday 23 September 2025 07:45:31 +0000 (0:00:00.614) 0:00:01.286 *****
2025-09-23 07:47:42.622530 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.622549 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.622569 | orchestrator | ok: [testbed-node-5]
2025-09-23
07:47:42.622588 | orchestrator |
2025-09-23 07:47:42.622605 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-23 07:47:42.622622 | orchestrator | Tuesday 23 September 2025 07:45:32 +0000 (0:00:00.636) 0:00:01.923 *****
2025-09-23 07:47:42.622641 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.622659 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.622678 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:47:42.622696 | orchestrator |
2025-09-23 07:47:42.623347 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-23 07:47:42.623368 | orchestrator | Tuesday 23 September 2025 07:45:32 +0000 (0:00:00.308) 0:00:02.231 *****
2025-09-23 07:47:42.623379 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.623390 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.623401 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:47:42.623476 | orchestrator |
2025-09-23 07:47:42.623489 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-23 07:47:42.623500 | orchestrator | Tuesday 23 September 2025 07:45:33 +0000 (0:00:00.811) 0:00:03.042 *****
2025-09-23 07:47:42.623511 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.623522 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.623533 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:47:42.623545 | orchestrator |
2025-09-23 07:47:42.623556 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-23 07:47:42.623567 | orchestrator | Tuesday 23 September 2025 07:45:33 +0000 (0:00:00.348) 0:00:03.391 *****
2025-09-23 07:47:42.623578 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.623589 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.623600 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:47:42.623819 | orchestrator |
2025-09-23 07:47:42.623832 | orchestrator
| TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-23 07:47:42.623843 | orchestrator | Tuesday 23 September 2025 07:45:34 +0000 (0:00:00.321) 0:00:03.713 *****
2025-09-23 07:47:42.623854 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.623864 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.623875 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:47:42.623886 | orchestrator |
2025-09-23 07:47:42.623896 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-23 07:47:42.623907 | orchestrator | Tuesday 23 September 2025 07:45:34 +0000 (0:00:00.305) 0:00:04.019 *****
2025-09-23 07:47:42.623919 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.623931 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.623942 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.623953 | orchestrator |
2025-09-23 07:47:42.623978 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-23 07:47:42.623990 | orchestrator | Tuesday 23 September 2025 07:45:35 +0000 (0:00:00.484) 0:00:04.503 *****
2025-09-23 07:47:42.624001 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.624011 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.624022 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:47:42.624032 | orchestrator |
2025-09-23 07:47:42.624043 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-23 07:47:42.624054 | orchestrator | Tuesday 23 September 2025 07:45:35 +0000 (0:00:00.294) 0:00:04.798 *****
2025-09-23 07:47:42.624065 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-23 07:47:42.624075 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-23 07:47:42.624086 | orchestrator | ok: [testbed-node-3 ->
testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-23 07:47:42.624097 | orchestrator |
2025-09-23 07:47:42.624119 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-23 07:47:42.624130 | orchestrator | Tuesday 23 September 2025 07:45:36 +0000 (0:00:00.648) 0:00:05.446 *****
2025-09-23 07:47:42.624141 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.624151 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.624162 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:47:42.624173 | orchestrator |
2025-09-23 07:47:42.624183 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-23 07:47:42.624194 | orchestrator | Tuesday 23 September 2025 07:45:36 +0000 (0:00:02.240) 0:00:05.867 *****
2025-09-23 07:47:42.624205 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-23 07:47:42.624215 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-23 07:47:42.624226 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-23 07:47:42.624237 | orchestrator |
2025-09-23 07:47:42.624275 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-23 07:47:42.624286 | orchestrator | Tuesday 23 September 2025 07:45:38 +0000 (0:00:02.240) 0:00:08.108 *****
2025-09-23 07:47:42.624297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-23 07:47:42.624308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-23 07:47:42.624319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-23 07:47:42.624329 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.624340 | orchestrator |
2025-09-23 07:47:42.624350 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use]
*********************
2025-09-23 07:47:42.624405 | orchestrator | Tuesday 23 September 2025 07:45:39 +0000 (0:00:00.417) 0:00:08.525 *****
2025-09-23 07:47:42.624421 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624461 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.624473 | orchestrator |
2025-09-23 07:47:42.624486 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-23 07:47:42.624498 | orchestrator | Tuesday 23 September 2025 07:45:39 +0000 (0:00:00.725) 0:00:09.251 *****
2025-09-23 07:47:42.624512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624529 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment |
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624568 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.624580 | orchestrator |
2025-09-23 07:47:42.624593 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-23 07:47:42.624605 | orchestrator | Tuesday 23 September 2025 07:45:39 +0000 (0:00:00.140) 0:00:09.392 *****
2025-09-23 07:47:42.624621 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7858301a367a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-23 07:45:37.092221', 'end': '2025-09-23 07:45:37.142990', 'delta': '0:00:00.050769', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7858301a367a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624637 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0ec072e73d9c', 'stderr': '', 'rc': 0, 'cmd':
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-23 07:45:37.901370', 'end': '2025-09-23 07:45:37.947490', 'delta': '0:00:00.046120', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0ec072e73d9c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624683 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '829a4f2a9ec6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-23 07:45:38.517186', 'end': '2025-09-23 07:45:38.559020', 'delta': '0:00:00.041834', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['829a4f2a9ec6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-23 07:47:42.624698 | orchestrator |
2025-09-23 07:47:42.624710 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-23 07:47:42.624722 | orchestrator | Tuesday 23 September 2025 07:45:40 +0000 (0:00:00.306) 0:00:09.699 *****
2025-09-23 07:47:42.624734 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.624746 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:47:42.624759 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:47:42.624771 | orchestrator |
2025-09-23 07:47:42.624783 | orchestrator
| TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-23 07:47:42.624795 | orchestrator | Tuesday 23 September 2025 07:45:40 +0000 (0:00:00.436) 0:00:10.136 *****
2025-09-23 07:47:42.624808 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-23 07:47:42.624820 | orchestrator |
2025-09-23 07:47:42.624832 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-23 07:47:42.624844 | orchestrator | Tuesday 23 September 2025 07:45:42 +0000 (0:00:01.726) 0:00:11.862 *****
2025-09-23 07:47:42.624855 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.624866 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.624885 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.624896 | orchestrator |
2025-09-23 07:47:42.624907 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-23 07:47:42.624917 | orchestrator | Tuesday 23 September 2025 07:45:42 +0000 (0:00:00.272) 0:00:12.134 *****
2025-09-23 07:47:42.624928 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.624938 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.624949 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.624960 | orchestrator |
2025-09-23 07:47:42.624970 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-23 07:47:42.624981 | orchestrator | Tuesday 23 September 2025 07:45:43 +0000 (0:00:00.394) 0:00:12.528 *****
2025-09-23 07:47:42.624992 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625002 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.625013 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.625023 | orchestrator |
2025-09-23 07:47:42.625034 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-23 07:47:42.625045 |
orchestrator | Tuesday 23 September 2025 07:45:43 +0000 (0:00:00.563) 0:00:13.092 *****
2025-09-23 07:47:42.625055 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:47:42.625066 | orchestrator |
2025-09-23 07:47:42.625077 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-23 07:47:42.625093 | orchestrator | Tuesday 23 September 2025 07:45:43 +0000 (0:00:00.139) 0:00:13.232 *****
2025-09-23 07:47:42.625104 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625114 | orchestrator |
2025-09-23 07:47:42.625125 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-23 07:47:42.625135 | orchestrator | Tuesday 23 September 2025 07:45:44 +0000 (0:00:00.225) 0:00:13.458 *****
2025-09-23 07:47:42.625146 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625156 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.625167 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.625178 | orchestrator |
2025-09-23 07:47:42.625188 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-23 07:47:42.625199 | orchestrator | Tuesday 23 September 2025 07:45:44 +0000 (0:00:00.302) 0:00:13.760 *****
2025-09-23 07:47:42.625209 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625220 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.625230 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.625241 | orchestrator |
2025-09-23 07:47:42.625288 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-23 07:47:42.625299 | orchestrator | Tuesday 23 September 2025 07:45:44 +0000 (0:00:00.336) 0:00:14.096 *****
2025-09-23 07:47:42.625310 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625321 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.625332 | orchestrator | skipping:
[testbed-node-5]
2025-09-23 07:47:42.625342 | orchestrator |
2025-09-23 07:47:42.625353 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-23 07:47:42.625364 | orchestrator | Tuesday 23 September 2025 07:45:45 +0000 (0:00:00.544) 0:00:14.641 *****
2025-09-23 07:47:42.625375 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625386 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.625396 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.625407 | orchestrator |
2025-09-23 07:47:42.625418 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-23 07:47:42.625429 | orchestrator | Tuesday 23 September 2025 07:45:45 +0000 (0:00:00.342) 0:00:14.984 *****
2025-09-23 07:47:42.625439 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625450 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.625461 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.625471 | orchestrator |
2025-09-23 07:47:42.625482 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-23 07:47:42.625493 | orchestrator | Tuesday 23 September 2025 07:45:45 +0000 (0:00:00.325) 0:00:15.309 *****
2025-09-23 07:47:42.625512 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625522 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.625533 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:47:42.625544 | orchestrator |
2025-09-23 07:47:42.625555 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-23 07:47:42.625597 | orchestrator | Tuesday 23 September 2025 07:45:46 +0000 (0:00:00.336) 0:00:15.645 *****
2025-09-23 07:47:42.625610 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:47:42.625621 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:47:42.625632 | orchestrator | skipping:
[testbed-node-5] 2025-09-23 07:47:42.625643 | orchestrator | 2025-09-23 07:47:42.625653 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-23 07:47:42.625664 | orchestrator | Tuesday 23 September 2025 07:45:46 +0000 (0:00:00.530) 0:00:16.176 ***** 2025-09-23 07:47:42.625676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa3e03eb--2d2a--5719--835a--39fedcc9009f-osd--block--fa3e03eb--2d2a--5719--835a--39fedcc9009f', 'dm-uuid-LVM-FHNXkK9ifNZQ8LWRnVtzawWUcnaTHNMoPTyR0SdHm9HYDyijezVy6TPXhKueqSbf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-23 07:47:42.625688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0570cb7e--4d0f--57ea--8b12--da850e205fc7-osd--block--0570cb7e--4d0f--57ea--8b12--da850e205fc7', 'dm-uuid-LVM-2TNdLbVMERZXZ4qd8SvwGerVO8RLuWtHtDHFMeuc0zIMJys19eeLIYKnGH02vLY1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-23 07:47:42.625700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:47:42.625717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:47:42.625728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:47:42.625740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:47:42.625750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:47:42.625798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-23 07:47:42.625811 | orchestrator | skipping: [testbed-node-3] => (items loop6, loop7, sda, sdb, sdc, sdd, sr0: sda 80.00 GB QEMU root disk with cloudimg-rootfs/UEFI/BOOT partitions; sdb, sdc 20.00 GB Ceph OSD disks; sdd 20.00 GB unused disk; sr0 config-2 QEMU DVD-ROM)  2025-09-23 07:47:42.626125 | orchestrator | skipping: [testbed-node-3]  2025-09-23 07:47:42.625811 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0: same device layout as testbed-node-3)  2025-09-23 07:47:42.626362 | orchestrator | skipping: [testbed-node-4]  2025-09-23 07:47:42.626213 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0: same device layout as testbed-node-3)  2025-09-23 07:47:42.626535 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.626546 | orchestrator | 2025-09-23 07:47:42.626558 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-23 07:47:42.626569 | orchestrator | Tuesday 23 September 2025 07:45:47 +0000 (0:00:00.547)       0:00:16.724 ***** 2025-09-23 07:47:42.626582 | orchestrator | skipping: [testbed-node-3] => (false_condition 'osd_auto_discovery | default(False) | bool'; items dm-0, dm-1, loop0..loop7, sda, sdb) 2025-09-23 07:47:42.626754 | orchestrator | skipping: [testbed-node-4] => (false_condition 'osd_auto_discovery | default(False) | bool'; items dm-0, dm-1) 2025-09-23 07:47:42.626812 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0570cb7e--4d0f--57ea--8b12--da850e205fc7-osd--block--0570cb7e--4d0f--57ea--8b12--da850e205fc7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6rneW-lget-KtMe-Abei-G9R2-y4e5-RfJi6o', 'scsi-0QEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676', 'scsi-SQEMU_QEMU_HARDDISK_59088487-bcaf-4b18-9006-b2b85c395676'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626825 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013', 'scsi-SQEMU_QEMU_HARDDISK_7c71f819-4704-4446-9599-7b21db8e3013'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626859 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626884 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626915 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626963 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626983 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db81c41-a192-4c8d-88cd-7bf1813310e7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.626996 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7ede7e8c--1177--5738--bf30--f710eefa62dc-osd--block--7ede7e8c--1177--5738--bf30--f710eefa62dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xWpyYw-F1EM-syGU-5CgF-O2Pl-ep3M-c1Skla', 'scsi-0QEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec', 'scsi-SQEMU_QEMU_HARDDISK_0bff4510-9eaf-4f53-bf1a-5cee4a2246ec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6b345e42--d385--5c5d--ac31--471707d336a3-osd--block--6b345e42--d385--5c5d--ac31--471707d336a3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-crt05h-mBkl-dd9g-xK1l-c3FO-Ip8Q-BJ18xz', 'scsi-0QEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd', 'scsi-SQEMU_QEMU_HARDDISK_fd6a0863-0d42-4019-9e23-eb994da62dbd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627031 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.627043 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91', 'scsi-SQEMU_QEMU_HARDDISK_87ebb364-ac90-40d8-a46a-ebfab3ab7b91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627063 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-11-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627075 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.627087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4a27826e--7697--5dae--8bcf--65313ee63b58-osd--block--4a27826e--7697--5dae--8bcf--65313ee63b58', 'dm-uuid-LVM-6ficvLhRpdNC4bqCip3odIJa81AcAI17S3rd6t4DcCeiq1oknBitZJNhGfd7TN5u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627109 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b31a677e--efd4--57fc--b4ad--0e2207d5fa48-osd--block--b31a677e--efd4--57fc--b4ad--0e2207d5fa48', 'dm-uuid-LVM-e1XWlmUNqKg5peDV3v4Azb7L4vfb5JWGcwIpmZeqpT0ODLsARXlZJISNgmu0cQSb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627126 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627138 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627150 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627173 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627210 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627237 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627286 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627319 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e55276b-9f20-4253-94e7-5773ee8b5269-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-23 07:47:42.627350 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4a27826e--7697--5dae--8bcf--65313ee63b58-osd--block--4a27826e--7697--5dae--8bcf--65313ee63b58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3i9NQd-zCuN-te3J-sjJW-E1KT-pOAG-TIscye', 'scsi-0QEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173', 'scsi-SQEMU_QEMU_HARDDISK_5c88e186-44c4-4f29-a716-3e862e71c173'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627367 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b31a677e--efd4--57fc--b4ad--0e2207d5fa48-osd--block--b31a677e--efd4--57fc--b4ad--0e2207d5fa48'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VzVui3-jDRW-PDPs-G4T4-m0ml-2P0A-V3kUfU', 'scsi-0QEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6', 'scsi-SQEMU_QEMU_HARDDISK_b75d5c1f-0301-4e14-8d60-793226b090b6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627379 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb', 'scsi-SQEMU_QEMU_HARDDISK_c2ff2f17-feac-486a-a8d3-f5343e47e8fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627397 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-23-06-52-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-23 07:47:42.627409 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.627420 | orchestrator | 2025-09-23 07:47:42.627431 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-23 07:47:42.627442 | orchestrator | Tuesday 23 September 2025 07:45:48 +0000 (0:00:00.722) 0:00:17.446 ***** 2025-09-23 07:47:42.627453 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:47:42.627471 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:47:42.627482 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:47:42.627493 | orchestrator | 2025-09-23 07:47:42.627504 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-23 07:47:42.627515 | orchestrator | Tuesday 23 September 2025 07:45:48 +0000 (0:00:00.772) 0:00:18.219 ***** 2025-09-23 07:47:42.627525 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:47:42.627536 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:47:42.627547 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:47:42.627558 | orchestrator | 2025-09-23 07:47:42.627569 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-23 07:47:42.627579 | orchestrator | Tuesday 23 September 2025 07:45:49 +0000 (0:00:00.496) 0:00:18.716 ***** 2025-09-23 07:47:42.627590 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:47:42.627600 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:47:42.627611 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:47:42.627622 | orchestrator | 2025-09-23 07:47:42.627632 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-23 07:47:42.627643 | orchestrator | Tuesday 23 September 2025 07:45:50 +0000 (0:00:00.765) 
0:00:19.481 ***** 2025-09-23 07:47:42.627654 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.627664 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.627675 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.627686 | orchestrator | 2025-09-23 07:47:42.627696 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-23 07:47:42.627707 | orchestrator | Tuesday 23 September 2025 07:45:50 +0000 (0:00:00.297) 0:00:19.779 ***** 2025-09-23 07:47:42.627717 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.627728 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.627738 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.627749 | orchestrator | 2025-09-23 07:47:42.627760 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-23 07:47:42.627771 | orchestrator | Tuesday 23 September 2025 07:45:50 +0000 (0:00:00.419) 0:00:20.198 ***** 2025-09-23 07:47:42.627781 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.627792 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.627802 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.627813 | orchestrator | 2025-09-23 07:47:42.627824 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-23 07:47:42.627835 | orchestrator | Tuesday 23 September 2025 07:45:51 +0000 (0:00:00.549) 0:00:20.747 ***** 2025-09-23 07:47:42.627846 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-23 07:47:42.627857 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-23 07:47:42.627873 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-23 07:47:42.627884 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-23 07:47:42.627895 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-23 07:47:42.627906 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-23 07:47:42.627916 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-23 07:47:42.627927 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-23 07:47:42.627937 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-23 07:47:42.627948 | orchestrator | 2025-09-23 07:47:42.627959 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-23 07:47:42.627970 | orchestrator | Tuesday 23 September 2025 07:45:52 +0000 (0:00:00.844) 0:00:21.592 ***** 2025-09-23 07:47:42.627981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-23 07:47:42.627991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-23 07:47:42.628002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-23 07:47:42.628012 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.628023 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-23 07:47:42.628034 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-23 07:47:42.628055 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-23 07:47:42.628066 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.628076 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-23 07:47:42.628087 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-23 07:47:42.628097 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-23 07:47:42.628108 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.628119 | orchestrator | 2025-09-23 07:47:42.628129 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-23 07:47:42.628140 | orchestrator | Tuesday 23 September 2025 07:45:52 +0000 (0:00:00.357) 0:00:21.950 ***** 2025-09-23 
07:47:42.628151 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:47:42.628162 | orchestrator | 2025-09-23 07:47:42.628173 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-23 07:47:42.628184 | orchestrator | Tuesday 23 September 2025 07:45:53 +0000 (0:00:00.720) 0:00:22.670 ***** 2025-09-23 07:47:42.628195 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.628206 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.628216 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.628227 | orchestrator | 2025-09-23 07:47:42.628243 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-23 07:47:42.628272 | orchestrator | Tuesday 23 September 2025 07:45:53 +0000 (0:00:00.316) 0:00:22.986 ***** 2025-09-23 07:47:42.628283 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.628294 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.628305 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.628316 | orchestrator | 2025-09-23 07:47:42.628326 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-23 07:47:42.628337 | orchestrator | Tuesday 23 September 2025 07:45:53 +0000 (0:00:00.319) 0:00:23.306 ***** 2025-09-23 07:47:42.628348 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.628359 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.628370 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:47:42.628381 | orchestrator | 2025-09-23 07:47:42.628392 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-23 07:47:42.628403 | orchestrator | Tuesday 23 September 2025 07:45:54 +0000 (0:00:00.343) 0:00:23.649 ***** 2025-09-23 
07:47:42.628413 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:47:42.628424 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:47:42.628435 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:47:42.628446 | orchestrator | 2025-09-23 07:47:42.628457 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-23 07:47:42.628468 | orchestrator | Tuesday 23 September 2025 07:45:54 +0000 (0:00:00.598) 0:00:24.248 ***** 2025-09-23 07:47:42.628479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:47:42.628490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:47:42.628501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:47:42.628511 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.628522 | orchestrator | 2025-09-23 07:47:42.628533 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-23 07:47:42.628544 | orchestrator | Tuesday 23 September 2025 07:45:55 +0000 (0:00:00.372) 0:00:24.620 ***** 2025-09-23 07:47:42.628555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:47:42.628566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:47:42.628577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:47:42.628587 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.628598 | orchestrator | 2025-09-23 07:47:42.628609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-23 07:47:42.628627 | orchestrator | Tuesday 23 September 2025 07:45:55 +0000 (0:00:00.400) 0:00:25.021 ***** 2025-09-23 07:47:42.628638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-23 07:47:42.628649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-23 07:47:42.628660 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-23 07:47:42.628671 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.628681 | orchestrator | 2025-09-23 07:47:42.628692 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-23 07:47:42.628703 | orchestrator | Tuesday 23 September 2025 07:45:56 +0000 (0:00:00.413) 0:00:25.435 ***** 2025-09-23 07:47:42.628714 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:47:42.628725 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:47:42.628741 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:47:42.628752 | orchestrator | 2025-09-23 07:47:42.628763 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-23 07:47:42.628774 | orchestrator | Tuesday 23 September 2025 07:45:56 +0000 (0:00:00.329) 0:00:25.764 ***** 2025-09-23 07:47:42.628785 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-23 07:47:42.628796 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-23 07:47:42.628806 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-23 07:47:42.628817 | orchestrator | 2025-09-23 07:47:42.628828 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-23 07:47:42.628839 | orchestrator | Tuesday 23 September 2025 07:45:56 +0000 (0:00:00.530) 0:00:26.295 ***** 2025-09-23 07:47:42.628850 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-23 07:47:42.628861 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-23 07:47:42.628872 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-23 07:47:42.628883 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-23 07:47:42.628894 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-09-23 07:47:42.628904 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-23 07:47:42.628915 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-23 07:47:42.628926 | orchestrator | 2025-09-23 07:47:42.628937 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-23 07:47:42.628948 | orchestrator | Tuesday 23 September 2025 07:45:57 +0000 (0:00:01.016) 0:00:27.311 ***** 2025-09-23 07:47:42.628959 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-23 07:47:42.628970 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-23 07:47:42.628980 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-23 07:47:42.628991 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-23 07:47:42.629002 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-23 07:47:42.629013 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-23 07:47:42.629024 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-23 07:47:42.629035 | orchestrator | 2025-09-23 07:47:42.629051 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-23 07:47:42.629062 | orchestrator | Tuesday 23 September 2025 07:45:59 +0000 (0:00:02.024) 0:00:29.336 ***** 2025-09-23 07:47:42.629073 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:47:42.629083 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:47:42.629094 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-23 07:47:42.629105 | orchestrator | 2025-09-23 07:47:42.629115 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-23 07:47:42.629133 | orchestrator | Tuesday 23 September 2025 07:46:00 +0000 (0:00:00.371) 0:00:29.707 ***** 2025-09-23 07:47:42.629144 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-23 07:47:42.629155 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-23 07:47:42.629167 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-23 07:47:42.629178 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-23 07:47:42.629189 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-23 07:47:42.629200 | orchestrator | 2025-09-23 07:47:42.629210 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-09-23 07:47:42.629221 | orchestrator | Tuesday 23 September 2025 07:46:45 +0000 (0:00:45.384) 0:01:15.092 ***** 2025-09-23 07:47:42.629232 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629243 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629308 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629319 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629330 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629340 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629351 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-23 07:47:42.629362 | orchestrator | 2025-09-23 07:47:42.629372 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-23 07:47:42.629383 | orchestrator | Tuesday 23 September 2025 07:47:10 +0000 (0:00:25.260) 0:01:40.353 ***** 2025-09-23 07:47:42.629394 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629405 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629416 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629427 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629438 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629449 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629459 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-23 07:47:42.629470 | orchestrator | 2025-09-23 07:47:42.629481 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-23 07:47:42.629492 | orchestrator | Tuesday 23 September 2025 07:47:23 +0000 (0:00:12.367) 0:01:52.720 ***** 2025-09-23 07:47:42.629502 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629520 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-23 07:47:42.629531 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-23 07:47:42.629542 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629553 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-23 07:47:42.629563 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-23 07:47:42.629580 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629591 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-23 07:47:42.629602 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-23 07:47:42.629612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629623 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-23 07:47:42.629634 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-23 07:47:42.629644 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-23 07:47:42.629653 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-09-23 07:47:42.629662 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-23 07:47:42.629672 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-23 07:47:42.629713 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-23 07:47:42.629723 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-23 07:47:42.629733 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-09-23 07:47:42.629743 | orchestrator |
2025-09-23 07:47:42.629752 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:47:42.629761 | orchestrator | testbed-node-3 : ok=25  changed=0  unreachable=0  failed=0  skipped=28  rescued=0  ignored=0
2025-09-23 07:47:42.629772 | orchestrator | testbed-node-4 : ok=18  changed=0  unreachable=0  failed=0  skipped=21  rescued=0  ignored=0
2025-09-23 07:47:42.629782 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0  failed=0  skipped=20  rescued=0  ignored=0
2025-09-23 07:47:42.629791 | orchestrator |
2025-09-23 07:47:42.629801 | orchestrator |
2025-09-23 07:47:42.629810 | orchestrator |
2025-09-23 07:47:42.629819 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:47:42.629829 | orchestrator | Tuesday 23 September 2025 07:47:40 +0000 (0:00:17.415) 0:02:10.136 *****
2025-09-23 07:47:42.629838 | orchestrator | ===============================================================================
2025-09-23 07:47:42.629848 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.38s
2025-09-23 07:47:42.629857 | orchestrator | generate keys ---------------------------------------------------------- 25.26s
2025-09-23 07:47:42.629867 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.42s
2025-09-23 07:47:42.629876 | orchestrator | get keys from monitors ------------------------------------------------- 12.37s
2025-09-23 07:47:42.629891 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.24s
2025-09-23 07:47:42.629901 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.02s
2025-09-23 07:47:42.629910 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.73s
2025-09-23 07:47:42.629920 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s
2025-09-23 07:47:42.629935 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s
2025-09-23 07:47:42.629945 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s
2025-09-23 07:47:42.629954 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.77s
2025-09-23 07:47:42.629964 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.77s
2025-09-23 07:47:42.629973 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.73s
2025-09-23 07:47:42.629982 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.72s
2025-09-23 07:47:42.629992 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s
2025-09-23 07:47:42.630001 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s
2025-09-23 07:47:42.630010 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s
2025-09-23 07:47:42.630067 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s
2025-09-23 07:47:42.630077 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.60s
2025-09-23
07:47:42.630086 | orchestrator | ceph-facts : Set_fact fsid ---------------------------------------------- 0.56s 2025-09-23 07:47:42.630096 | orchestrator | 2025-09-23 07:47:42 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:42.630106 | orchestrator | 2025-09-23 07:47:42 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:45.672890 | orchestrator | 2025-09-23 07:47:45 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:47:45.674221 | orchestrator | 2025-09-23 07:47:45 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:45.675728 | orchestrator | 2025-09-23 07:47:45 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:45.675907 | orchestrator | 2025-09-23 07:47:45 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:48.731540 | orchestrator | 2025-09-23 07:47:48 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:47:48.734566 | orchestrator | 2025-09-23 07:47:48 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:48.735629 | orchestrator | 2025-09-23 07:47:48 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:48.736303 | orchestrator | 2025-09-23 07:47:48 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:51.772200 | orchestrator | 2025-09-23 07:47:51 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:47:51.772889 | orchestrator | 2025-09-23 07:47:51 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:51.774154 | orchestrator | 2025-09-23 07:47:51 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:51.774715 | orchestrator | 2025-09-23 07:47:51 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:54.823369 | orchestrator | 2025-09-23 07:47:54 | INFO  | Task 
f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:47:54.826187 | orchestrator | 2025-09-23 07:47:54 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:54.828366 | orchestrator | 2025-09-23 07:47:54 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:54.828397 | orchestrator | 2025-09-23 07:47:54 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:47:57.883371 | orchestrator | 2025-09-23 07:47:57 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:47:57.884815 | orchestrator | 2025-09-23 07:47:57 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:47:57.886708 | orchestrator | 2025-09-23 07:47:57 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:47:57.886781 | orchestrator | 2025-09-23 07:47:57 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:48:00.942791 | orchestrator | 2025-09-23 07:48:00 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:48:00.944038 | orchestrator | 2025-09-23 07:48:00 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:48:00.945370 | orchestrator | 2025-09-23 07:48:00 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:48:00.945422 | orchestrator | 2025-09-23 07:48:00 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:48:03.992178 | orchestrator | 2025-09-23 07:48:03 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:48:03.992598 | orchestrator | 2025-09-23 07:48:03 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:48:03.993714 | orchestrator | 2025-09-23 07:48:03 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:48:03.993735 | orchestrator | 2025-09-23 07:48:03 | INFO  | Wait 1 second(s) until the next 
check 2025-09-23 07:48:07.042718 | orchestrator | 2025-09-23 07:48:07 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:48:07.044174 | orchestrator | 2025-09-23 07:48:07 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:48:07.047377 | orchestrator | 2025-09-23 07:48:07 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:48:07.047456 | orchestrator | 2025-09-23 07:48:07 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:48:10.089765 | orchestrator | 2025-09-23 07:48:10 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state STARTED 2025-09-23 07:48:10.090093 | orchestrator | 2025-09-23 07:48:10 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:48:10.090149 | orchestrator | 2025-09-23 07:48:10 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:48:10.090169 | orchestrator | 2025-09-23 07:48:10 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:48:13.151163 | orchestrator | 2025-09-23 07:48:13 | INFO  | Task f4fe6d37-0c40-4208-910b-c338f2848c96 is in state SUCCESS 2025-09-23 07:48:13.151821 | orchestrator | 2025-09-23 07:48:13 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:48:13.153767 | orchestrator | 2025-09-23 07:48:13 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED 2025-09-23 07:48:13.155779 | orchestrator | 2025-09-23 07:48:13 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:48:13.156592 | orchestrator | 2025-09-23 07:48:13 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:48:16.210652 | orchestrator | 2025-09-23 07:48:16 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:48:16.211484 | orchestrator | 2025-09-23 07:48:16 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED 2025-09-23 
07:48:16.212280 | orchestrator | 2025-09-23 07:48:16 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state STARTED 2025-09-23 07:48:16.212295 | orchestrator | 2025-09-23 07:48:16 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:48:19.254163 | orchestrator | 2025-09-23 07:48:19 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:48:19.255965 | orchestrator | 2025-09-23 07:48:19 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED 2025-09-23 07:48:19.259723 | orchestrator | 2025-09-23 07:48:19 | INFO  | Task 5bc57163-75c5-4eb3-a4db-0bfdd4cc59f7 is in state SUCCESS 2025-09-23 07:48:19.261272 | orchestrator | 2025-09-23 07:48:19.261284 | orchestrator | 2025-09-23 07:48:19.261289 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-23 07:48:19.261293 | orchestrator | 2025-09-23 07:48:19.261297 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-23 07:48:19.261301 | orchestrator | Tuesday 23 September 2025 07:47:45 +0000 (0:00:00.160) 0:00:00.160 ***** 2025-09-23 07:48:19.261305 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-23 07:48:19.261310 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-23 07:48:19.261314 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-23 07:48:19.261317 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-23 07:48:19.261321 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-23 07:48:19.261325 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-23 07:48:19.261328 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-23 07:48:19.261332 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-23 07:48:19.261336 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-23 07:48:19.261339 | orchestrator | 2025-09-23 07:48:19.261343 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-23 07:48:19.261353 | orchestrator | Tuesday 23 September 2025 07:47:49 +0000 (0:00:04.286) 0:00:04.446 ***** 2025-09-23 07:48:19.261358 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-23 07:48:19.261362 | orchestrator | 2025-09-23 07:48:19.261365 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-23 07:48:19.261369 | orchestrator | Tuesday 23 September 2025 07:47:50 +0000 (0:00:01.018) 0:00:05.464 ***** 2025-09-23 07:48:19.261373 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-23 07:48:19.261377 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-23 07:48:19.261380 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-23 07:48:19.261384 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-23 07:48:19.261388 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-23 07:48:19.261392 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-23 07:48:19.261396 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-23 07:48:19.261399 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 
2025-09-23 07:48:19.261403 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-09-23 07:48:19.261407 | orchestrator |
2025-09-23 07:48:19.261410 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-09-23 07:48:19.261414 | orchestrator | Tuesday 23 September 2025 07:48:03 +0000 (0:00:13.211) 0:00:18.676 *****
2025-09-23 07:48:19.261418 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-09-23 07:48:19.261422 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-23 07:48:19.261426 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-23 07:48:19.261429 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-09-23 07:48:19.261438 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-23 07:48:19.261442 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-09-23 07:48:19.261446 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-09-23 07:48:19.261450 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-09-23 07:48:19.261453 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-09-23 07:48:19.261457 | orchestrator |
2025-09-23 07:48:19.261461 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:48:19.261464 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-09-23 07:48:19.261469 | orchestrator |
2025-09-23 07:48:19.261473 | orchestrator |
2025-09-23 07:48:19.261476 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:48:19.261480 | orchestrator | Tuesday 23 September 2025 07:48:10 +0000 (0:00:06.684) 0:00:25.360 *****
2025-09-23 07:48:19.261484 | orchestrator | ===============================================================================
2025-09-23 07:48:19.261487 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.21s
2025-09-23 07:48:19.261491 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.68s
2025-09-23 07:48:19.261495 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.29s
2025-09-23 07:48:19.261498 | orchestrator | Create share directory -------------------------------------------------- 1.02s
2025-09-23 07:48:19.261502 | orchestrator |
2025-09-23 07:48:19.261506 | orchestrator |
2025-09-23 07:48:19.261509 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:48:19.261513 | orchestrator |
2025-09-23 07:48:19.261522 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:48:19.261526 | orchestrator | Tuesday 23 September 2025 07:46:37 +0000 (0:00:00.237) 0:00:00.237 *****
2025-09-23 07:48:19.261530 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:48:19.261534 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:48:19.261538 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:48:19.261567 | orchestrator |
2025-09-23 07:48:19.261573 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:48:19.261577 | orchestrator | Tuesday 23 September 2025 07:46:37 +0000 (0:00:00.275) 0:00:00.513 *****
2025-09-23 07:48:19.261580 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-09-23 07:48:19.261584 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-09-23 07:48:19.261588 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-09-23 07:48:19.261591 | orchestrator |
2025-09-23 07:48:19.261595 | orchestrator | PLAY
[Apply role horizon] ****************************************************** 2025-09-23 07:48:19.261599 | orchestrator | 2025-09-23 07:48:19.261603 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-23 07:48:19.261606 | orchestrator | Tuesday 23 September 2025 07:46:38 +0000 (0:00:00.348) 0:00:00.861 ***** 2025-09-23 07:48:19.261610 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:48:19.261614 | orchestrator | 2025-09-23 07:48:19.261617 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-23 07:48:19.261694 | orchestrator | Tuesday 23 September 2025 07:46:38 +0000 (0:00:00.448) 0:00:01.310 ***** 2025-09-23 07:48:19.261704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-23 07:48:19.261720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-23 07:48:19.261725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-23 07:48:19.261732 | orchestrator | 2025-09-23 07:48:19.261736 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-23 07:48:19.261740 | orchestrator | Tuesday 23 September 2025 07:46:39 +0000 (0:00:00.993) 0:00:02.303 ***** 2025-09-23 07:48:19.261744 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.261748 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.261752 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.261756 | orchestrator | 2025-09-23 07:48:19.261760 | orchestrator | 
TASK [horizon : include_tasks] ************************************************* 2025-09-23 07:48:19.261763 | orchestrator | Tuesday 23 September 2025 07:46:39 +0000 (0:00:00.366) 0:00:02.670 ***** 2025-09-23 07:48:19.261767 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-23 07:48:19.261771 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-23 07:48:19.261778 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-23 07:48:19.261782 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-23 07:48:19.261786 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-23 07:48:19.261789 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-23 07:48:19.261793 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-23 07:48:19.261797 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-23 07:48:19.261801 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-23 07:48:19.261805 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-23 07:48:19.261809 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-23 07:48:19.261813 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-23 07:48:19.261819 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-23 07:48:19.261823 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-23 07:48:19.261827 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  
2025-09-23 07:48:19.261831 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-23 07:48:19.261836 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-23 07:48:19.261840 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-23 07:48:19.261844 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-23 07:48:19.261848 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-23 07:48:19.261852 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-23 07:48:19.261855 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-23 07:48:19.261859 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-23 07:48:19.261863 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-23 07:48:19.261867 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-23 07:48:19.261872 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-23 07:48:19.261876 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-23 07:48:19.261879 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-23 07:48:19.261883 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-23 07:48:19.261887 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-23 07:48:19.261891 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-23 07:48:19.261895 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-23 07:48:19.261898 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-23 07:48:19.261902 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-23 07:48:19.261906 | orchestrator | 2025-09-23 07:48:19.261910 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.261914 | orchestrator | Tuesday 23 September 2025 07:46:40 +0000 (0:00:00.725) 0:00:03.395 ***** 2025-09-23 07:48:19.261917 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.261921 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.261925 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.261929 | orchestrator | 2025-09-23 07:48:19.261933 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.261936 | orchestrator | Tuesday 23 September 2025 07:46:41 +0000 (0:00:00.312) 0:00:03.708 ***** 2025-09-23 07:48:19.261940 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.261944 | orchestrator | 2025-09-23 07:48:19.261950 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2025-09-23 07:48:19.261956 | orchestrator | Tuesday 23 September 2025 07:46:41 +0000 (0:00:00.133) 0:00:03.842 ***** 2025-09-23 07:48:19.261960 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.261964 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.261967 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.261971 | orchestrator | 2025-09-23 07:48:19.261975 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.261979 | orchestrator | Tuesday 23 September 2025 07:46:41 +0000 (0:00:00.437) 0:00:04.279 ***** 2025-09-23 07:48:19.261983 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.261986 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.261990 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.261994 | orchestrator | 2025-09-23 07:48:19.261998 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262002 | orchestrator | Tuesday 23 September 2025 07:46:41 +0000 (0:00:00.298) 0:00:04.578 ***** 2025-09-23 07:48:19.262005 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262009 | orchestrator | 2025-09-23 07:48:19.262031 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262036 | orchestrator | Tuesday 23 September 2025 07:46:42 +0000 (0:00:00.140) 0:00:04.718 ***** 2025-09-23 07:48:19.262040 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262044 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262048 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262052 | orchestrator | 2025-09-23 07:48:19.262056 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.262060 | orchestrator | Tuesday 23 September 2025 07:46:42 +0000 
(0:00:00.293) 0:00:05.012 ***** 2025-09-23 07:48:19.262064 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.262068 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.262072 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.262076 | orchestrator | 2025-09-23 07:48:19.262080 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262086 | orchestrator | Tuesday 23 September 2025 07:46:42 +0000 (0:00:00.300) 0:00:05.312 ***** 2025-09-23 07:48:19.262090 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262094 | orchestrator | 2025-09-23 07:48:19.262098 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262102 | orchestrator | Tuesday 23 September 2025 07:46:42 +0000 (0:00:00.132) 0:00:05.445 ***** 2025-09-23 07:48:19.262106 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262110 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262114 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262118 | orchestrator | 2025-09-23 07:48:19.262122 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.262126 | orchestrator | Tuesday 23 September 2025 07:46:43 +0000 (0:00:00.534) 0:00:05.979 ***** 2025-09-23 07:48:19.262130 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.262135 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.262139 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.262143 | orchestrator | 2025-09-23 07:48:19.262147 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262151 | orchestrator | Tuesday 23 September 2025 07:46:43 +0000 (0:00:00.377) 0:00:06.357 ***** 2025-09-23 07:48:19.262155 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262159 | orchestrator | 2025-09-23 07:48:19.262163 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262167 | orchestrator | Tuesday 23 September 2025 07:46:43 +0000 (0:00:00.139) 0:00:06.496 ***** 2025-09-23 07:48:19.262171 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262175 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262179 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262182 | orchestrator | 2025-09-23 07:48:19.262186 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.262194 | orchestrator | Tuesday 23 September 2025 07:46:44 +0000 (0:00:00.301) 0:00:06.798 ***** 2025-09-23 07:48:19.262198 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.262202 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.262206 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.262210 | orchestrator | 2025-09-23 07:48:19.262214 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262218 | orchestrator | Tuesday 23 September 2025 07:46:44 +0000 (0:00:00.333) 0:00:07.131 ***** 2025-09-23 07:48:19.262234 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262238 | orchestrator | 2025-09-23 07:48:19.262242 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262245 | orchestrator | Tuesday 23 September 2025 07:46:44 +0000 (0:00:00.360) 0:00:07.491 ***** 2025-09-23 07:48:19.262249 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262253 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262257 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262260 | orchestrator | 2025-09-23 07:48:19.262264 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.262268 | orchestrator | Tuesday 23 September 
2025 07:46:45 +0000 (0:00:00.323) 0:00:07.815 ***** 2025-09-23 07:48:19.262272 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.262276 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.262279 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.262283 | orchestrator | 2025-09-23 07:48:19.262287 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262291 | orchestrator | Tuesday 23 September 2025 07:46:45 +0000 (0:00:00.322) 0:00:08.138 ***** 2025-09-23 07:48:19.262295 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262298 | orchestrator | 2025-09-23 07:48:19.262302 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262306 | orchestrator | Tuesday 23 September 2025 07:46:45 +0000 (0:00:00.121) 0:00:08.260 ***** 2025-09-23 07:48:19.262310 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262313 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262317 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262321 | orchestrator | 2025-09-23 07:48:19.262325 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.262329 | orchestrator | Tuesday 23 September 2025 07:46:45 +0000 (0:00:00.305) 0:00:08.565 ***** 2025-09-23 07:48:19.262333 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.262337 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.262342 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.262346 | orchestrator | 2025-09-23 07:48:19.262353 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262357 | orchestrator | Tuesday 23 September 2025 07:46:46 +0000 (0:00:00.519) 0:00:09.085 ***** 2025-09-23 07:48:19.262361 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262365 | orchestrator | 2025-09-23 
07:48:19.262370 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262374 | orchestrator | Tuesday 23 September 2025 07:46:46 +0000 (0:00:00.139) 0:00:09.225 ***** 2025-09-23 07:48:19.262379 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262383 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262387 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262391 | orchestrator | 2025-09-23 07:48:19.262396 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.262400 | orchestrator | Tuesday 23 September 2025 07:46:46 +0000 (0:00:00.306) 0:00:09.531 ***** 2025-09-23 07:48:19.262405 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.262409 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.262413 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.262417 | orchestrator | 2025-09-23 07:48:19.262421 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262427 | orchestrator | Tuesday 23 September 2025 07:46:47 +0000 (0:00:00.347) 0:00:09.879 ***** 2025-09-23 07:48:19.262431 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262435 | orchestrator | 2025-09-23 07:48:19.262438 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262442 | orchestrator | Tuesday 23 September 2025 07:46:47 +0000 (0:00:00.127) 0:00:10.006 ***** 2025-09-23 07:48:19.262446 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262450 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262453 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262457 | orchestrator | 2025-09-23 07:48:19.262463 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.262466 | orchestrator | 
Tuesday 23 September 2025 07:46:47 +0000 (0:00:00.294) 0:00:10.301 ***** 2025-09-23 07:48:19.262470 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.262474 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.262478 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.262481 | orchestrator | 2025-09-23 07:48:19.262485 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262489 | orchestrator | Tuesday 23 September 2025 07:46:48 +0000 (0:00:00.526) 0:00:10.827 ***** 2025-09-23 07:48:19.262493 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262496 | orchestrator | 2025-09-23 07:48:19.262500 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262504 | orchestrator | Tuesday 23 September 2025 07:46:48 +0000 (0:00:00.124) 0:00:10.951 ***** 2025-09-23 07:48:19.262507 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262511 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262515 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262519 | orchestrator | 2025-09-23 07:48:19.262522 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-23 07:48:19.262526 | orchestrator | Tuesday 23 September 2025 07:46:48 +0000 (0:00:00.275) 0:00:11.226 ***** 2025-09-23 07:48:19.262530 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:48:19.262534 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:48:19.262537 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:48:19.262541 | orchestrator | 2025-09-23 07:48:19.262545 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-23 07:48:19.262549 | orchestrator | Tuesday 23 September 2025 07:46:48 +0000 (0:00:00.331) 0:00:11.557 ***** 2025-09-23 07:48:19.262552 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262556 | 
orchestrator | 2025-09-23 07:48:19.262560 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-23 07:48:19.262564 | orchestrator | Tuesday 23 September 2025 07:46:49 +0000 (0:00:00.142) 0:00:11.700 ***** 2025-09-23 07:48:19.262567 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262571 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262575 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262579 | orchestrator | 2025-09-23 07:48:19.262582 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-23 07:48:19.262586 | orchestrator | Tuesday 23 September 2025 07:46:49 +0000 (0:00:00.518) 0:00:12.218 ***** 2025-09-23 07:48:19.262590 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:48:19.262593 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:48:19.262597 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:48:19.262601 | orchestrator | 2025-09-23 07:48:19.262605 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-23 07:48:19.262608 | orchestrator | Tuesday 23 September 2025 07:46:51 +0000 (0:00:01.674) 0:00:13.892 ***** 2025-09-23 07:48:19.262612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-23 07:48:19.262616 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-23 07:48:19.262620 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-23 07:48:19.262628 | orchestrator | 2025-09-23 07:48:19.262632 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-23 07:48:19.262636 | orchestrator | Tuesday 23 September 2025 07:46:52 +0000 (0:00:01.631) 0:00:15.524 ***** 2025-09-23 07:48:19.262639 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-23 07:48:19.262643 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-23 07:48:19.262647 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-23 07:48:19.262650 | orchestrator | 2025-09-23 07:48:19.262654 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-23 07:48:19.262658 | orchestrator | Tuesday 23 September 2025 07:46:54 +0000 (0:00:01.920) 0:00:17.445 ***** 2025-09-23 07:48:19.262664 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-23 07:48:19.262668 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-23 07:48:19.262672 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-23 07:48:19.262675 | orchestrator | 2025-09-23 07:48:19.262679 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-23 07:48:19.262683 | orchestrator | Tuesday 23 September 2025 07:46:56 +0000 (0:00:01.964) 0:00:19.410 ***** 2025-09-23 07:48:19.262687 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262690 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:48:19.262694 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:48:19.262698 | orchestrator | 2025-09-23 07:48:19.262701 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-23 07:48:19.262705 | orchestrator | Tuesday 23 September 2025 07:46:57 +0000 (0:00:00.284) 0:00:19.694 ***** 2025-09-23 07:48:19.262709 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:48:19.262713 | orchestrator | skipping: [testbed-node-1] 2025-09-23 
07:48:19.262716 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:48:19.262720 | orchestrator |
2025-09-23 07:48:19.262724 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-23 07:48:19.262728 | orchestrator | Tuesday 23 September 2025 07:46:57 +0000 (0:00:00.301) 0:00:19.996 *****
2025-09-23 07:48:19.262732 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:48:19.262735 | orchestrator |
2025-09-23 07:48:19.262739 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-09-23 07:48:19.262745 | orchestrator | Tuesday 23 September 2025 07:46:57 +0000 (0:00:00.574) 0:00:20.570 *****
2025-09-23 07:48:19.262749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262773 | orchestrator |
2025-09-23 07:48:19.262777 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-09-23 07:48:19.262781 | orchestrator | Tuesday 23 September 2025 07:46:59 +0000 (0:00:01.724) 0:00:22.295 *****
2025-09-23 07:48:19.262791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262795 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:48:19.262799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262808 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:48:19.262814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262821 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:48:19.262825 | orchestrator |
2025-09-23 07:48:19.262829 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-09-23 07:48:19.262832 | orchestrator | Tuesday 23 September 2025 07:47:00 +0000 (0:00:00.630) 0:00:22.926 *****
2025-09-23 07:48:19.262840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262844 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:48:19.262850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262857 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:48:19.262864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262868 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:48:19.262872 | orchestrator |
2025-09-23 07:48:19.262876 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2025-09-23 07:48:19.262880 | orchestrator | Tuesday 23 September 2025 07:47:00 +0000 (0:00:00.702) 0:00:23.628 *****
2025-09-23 07:48:19.262886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if {
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-23 07:48:19.262908 | orchestrator |
2025-09-23 07:48:19.262912 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-23 07:48:19.262916 | orchestrator | Tuesday 23 September 2025 07:47:02 +0000 (0:00:01.266) 0:00:24.894 *****
2025-09-23 07:48:19.262920 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:48:19.262923 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:48:19.262927 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:48:19.262931 | orchestrator |
2025-09-23 07:48:19.262935 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-23 07:48:19.262938 | orchestrator | Tuesday 23 September 2025 07:47:02 +0000 (0:00:00.263) 0:00:25.158 *****
2025-09-23 07:48:19.262942 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:48:19.262946 | orchestrator |
2025-09-23 07:48:19.262950 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-23 07:48:19.262953 | orchestrator | Tuesday 23 September 2025 07:47:02 +0000 (0:00:00.506) 0:00:25.664 *****
2025-09-23 07:48:19.262957 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:48:19.262961 | orchestrator |
2025-09-23 07:48:19.262966 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-23 07:48:19.262970 | orchestrator | Tuesday 23 September 2025 07:47:05 +0000 (0:00:02.173) 0:00:27.837 *****
2025-09-23 07:48:19.262974 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:48:19.262977 | orchestrator |
2025-09-23 07:48:19.262981 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-23 07:48:19.262985 | orchestrator | Tuesday 23 September 2025 07:47:07 +0000 (0:00:02.555) 0:00:30.393 *****
2025-09-23 07:48:19.262989 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:48:19.262992 | orchestrator |
2025-09-23 07:48:19.262996 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-23 07:48:19.263000 | orchestrator | Tuesday 23 September 2025 07:47:23 +0000 (0:00:15.430) 0:00:45.824 *****
2025-09-23 07:48:19.263003 | orchestrator |
2025-09-23 07:48:19.263007 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-23 07:48:19.263011 | orchestrator | Tuesday 23 September 2025 07:47:23 +0000 (0:00:00.069) 0:00:45.893 *****
2025-09-23 07:48:19.263014 | orchestrator |
2025-09-23 07:48:19.263018 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-23 07:48:19.263022 | orchestrator | Tuesday 23 September 2025 07:47:23 +0000 (0:00:00.060) 0:00:45.954 *****
2025-09-23 07:48:19.263029 | orchestrator |
2025-09-23 07:48:19.263033 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-23 07:48:19.263036 | orchestrator | Tuesday 23 September 2025 07:47:23 +0000 (0:00:00.065) 0:00:46.020 *****
2025-09-23 07:48:19.263040 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:48:19.263044 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:48:19.263048 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:48:19.263051 | orchestrator |
2025-09-23 07:48:19.263055 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:48:19.263061 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-23 07:48:19.263064 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-23 07:48:19.263068 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-23 07:48:19.263072 | orchestrator |
2025-09-23 07:48:19.263076 | orchestrator |
2025-09-23 07:48:19.263080 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:48:19.263083 | orchestrator | Tuesday 23 September 2025 07:48:18 +0000 (0:00:54.986) 0:01:41.006 *****
2025-09-23 07:48:19.263087 | orchestrator | ===============================================================================
2025-09-23 07:48:19.263091 | orchestrator | horizon : Restart horizon container ------------------------------------ 54.99s
2025-09-23 07:48:19.263095 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.43s
2025-09-23 07:48:19.263098 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.56s
2025-09-23 07:48:19.263102 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.17s
2025-09-23 07:48:19.263106 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.96s
2025-09-23 07:48:19.263109 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.92s
2025-09-23 07:48:19.263113 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.72s
2025-09-23 07:48:19.263117 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.67s
2025-09-23 07:48:19.263120 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.63s
2025-09-23 07:48:19.263124 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.27s
2025-09-23 07:48:19.263128 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.99s
2025-09-23 07:48:19.263132 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s
2025-09-23 07:48:19.263135 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.70s
2025-09-23 07:48:19.263139 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s
2025-09-23 07:48:19.263143 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s
2025-09-23 07:48:19.263147 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s
2025-09-23 07:48:19.263150 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s
2025-09-23 07:48:19.263154 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s
2025-09-23 07:48:19.263158 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s
2025-09-23 07:48:19.263162 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s
2025-09-23 07:48:19.263165 | orchestrator | 2025-09-23 07:48:19 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:22.294861 | orchestrator | 2025-09-23 07:48:22 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:22.296545 | orchestrator | 2025-09-23 07:48:22 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:22.296600 | orchestrator | 2025-09-23 07:48:22 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:25.337812 | orchestrator | 2025-09-23 07:48:25 | INFO  | Task
d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:25.339253 | orchestrator | 2025-09-23 07:48:25 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:25.339303 | orchestrator | 2025-09-23 07:48:25 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:28.378739 | orchestrator | 2025-09-23 07:48:28 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:28.380284 | orchestrator | 2025-09-23 07:48:28 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:28.380349 | orchestrator | 2025-09-23 07:48:28 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:31.419396 | orchestrator | 2025-09-23 07:48:31 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:31.420514 | orchestrator | 2025-09-23 07:48:31 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:31.420547 | orchestrator | 2025-09-23 07:48:31 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:34.455984 | orchestrator | 2025-09-23 07:48:34 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:34.457428 | orchestrator | 2025-09-23 07:48:34 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:34.457467 | orchestrator | 2025-09-23 07:48:34 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:37.501405 | orchestrator | 2025-09-23 07:48:37 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:37.503105 | orchestrator | 2025-09-23 07:48:37 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:37.503147 | orchestrator | 2025-09-23 07:48:37 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:40.553425 | orchestrator | 2025-09-23 07:48:40 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:40.562237 | orchestrator | 2025-09-23 07:48:40 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:40.562318 | orchestrator | 2025-09-23 07:48:40 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:43.612061 | orchestrator | 2025-09-23 07:48:43 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:43.614474 | orchestrator | 2025-09-23 07:48:43 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:43.614542 | orchestrator | 2025-09-23 07:48:43 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:46.666609 | orchestrator | 2025-09-23 07:48:46 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:46.667974 | orchestrator | 2025-09-23 07:48:46 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:46.668014 | orchestrator | 2025-09-23 07:48:46 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:49.713666 | orchestrator | 2025-09-23 07:48:49 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:49.715171 | orchestrator | 2025-09-23 07:48:49 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:49.715261 | orchestrator | 2025-09-23 07:48:49 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:52.760541 | orchestrator | 2025-09-23 07:48:52 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:52.762216 | orchestrator | 2025-09-23 07:48:52 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:52.762248 | orchestrator | 2025-09-23 07:48:52 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:55.810932 | orchestrator | 2025-09-23 07:48:55 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:55.813428 | orchestrator | 2025-09-23 07:48:55 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:55.813461 | orchestrator | 2025-09-23 07:48:55 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:48:58.863959 | orchestrator | 2025-09-23 07:48:58 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:48:58.864057 | orchestrator | 2025-09-23 07:48:58 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:48:58.864072 | orchestrator | 2025-09-23 07:48:58 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:49:01.912058 | orchestrator | 2025-09-23 07:49:01 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:49:01.914252 | orchestrator | 2025-09-23 07:49:01 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:49:01.915128 | orchestrator | 2025-09-23 07:49:01 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:49:04.962375 | orchestrator | 2025-09-23 07:49:04 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:49:04.962473 | orchestrator | 2025-09-23 07:49:04 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state STARTED
2025-09-23 07:49:04.962487 | orchestrator | 2025-09-23 07:49:04 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:49:08.016108 | orchestrator | 2025-09-23 07:49:08 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED
2025-09-23 07:49:08.016889 | orchestrator | 2025-09-23 07:49:08 | INFO  | Task bd199535-5296-48dd-b502-a7cbb350f1a0 is in state STARTED
2025-09-23 07:49:08.019153 | orchestrator | 2025-09-23 07:49:08 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED
2025-09-23 07:49:08.020300 | orchestrator | 2025-09-23 07:49:08 | INFO  | Task 672c8e17-e22d-4a1d-96c2-c0b3576abe52 is in state SUCCESS
2025-09-23 07:49:08.021632 | orchestrator | 2025-09-23 07:49:08 | INFO  | Task 40a8ec7f-4d5a-4bf9-bc0f-1586d9477e35 is in state
STARTED 2025-09-23 07:49:08.021655 | orchestrator | 2025-09-23 07:49:08 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:49:11.078767 | orchestrator | 2025-09-23 07:49:11 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state STARTED 2025-09-23 07:49:11.079389 | orchestrator | 2025-09-23 07:49:11 | INFO  | Task bd199535-5296-48dd-b502-a7cbb350f1a0 is in state STARTED 2025-09-23 07:49:11.081048 | orchestrator | 2025-09-23 07:49:11 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED 2025-09-23 07:49:11.082405 | orchestrator | 2025-09-23 07:49:11 | INFO  | Task 40a8ec7f-4d5a-4bf9-bc0f-1586d9477e35 is in state STARTED 2025-09-23 07:49:11.082463 | orchestrator | 2025-09-23 07:49:11 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:49:14.121854 | orchestrator | 2025-09-23 07:49:14 | INFO  | Task d3ed8a12-3e2b-4584-a03d-4559624decd8 is in state SUCCESS 2025-09-23 07:49:14.123590 | orchestrator | 2025-09-23 07:49:14.123640 | orchestrator | 2025-09-23 07:49:14.123654 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-23 07:49:14.123666 | orchestrator | 2025-09-23 07:49:14.123677 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-23 07:49:14.123719 | orchestrator | Tuesday 23 September 2025 07:48:14 +0000 (0:00:00.238) 0:00:00.238 ***** 2025-09-23 07:49:14.123731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-23 07:49:14.123744 | orchestrator | 2025-09-23 07:49:14.123755 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-23 07:49:14.123766 | orchestrator | Tuesday 23 September 2025 07:48:15 +0000 (0:00:00.236) 0:00:00.475 ***** 2025-09-23 07:49:14.123777 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-23 
07:49:14.123789 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-23 07:49:14.123800 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-23 07:49:14.123812 | orchestrator | 2025-09-23 07:49:14.123823 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-23 07:49:14.123833 | orchestrator | Tuesday 23 September 2025 07:48:16 +0000 (0:00:01.283) 0:00:01.758 ***** 2025-09-23 07:49:14.123844 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-23 07:49:14.123855 | orchestrator | 2025-09-23 07:49:14.123865 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-23 07:49:14.123876 | orchestrator | Tuesday 23 September 2025 07:48:17 +0000 (0:00:01.024) 0:00:02.783 ***** 2025-09-23 07:49:14.123886 | orchestrator | changed: [testbed-manager] 2025-09-23 07:49:14.123897 | orchestrator | 2025-09-23 07:49:14.123908 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-23 07:49:14.123918 | orchestrator | Tuesday 23 September 2025 07:48:18 +0000 (0:00:00.972) 0:00:03.756 ***** 2025-09-23 07:49:14.123929 | orchestrator | changed: [testbed-manager] 2025-09-23 07:49:14.123940 | orchestrator | 2025-09-23 07:49:14.123950 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-23 07:49:14.123961 | orchestrator | Tuesday 23 September 2025 07:48:19 +0000 (0:00:00.836) 0:00:04.592 ***** 2025-09-23 07:49:14.123971 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
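The `FAILED - RETRYING … (10 retries left)` line above is Ansible's bounded retry loop (`until`/`retries`/`delay`): the task is re-run until its condition holds, and the service eventually comes up on a later attempt. A minimal sketch of that pattern in Python; `retry_until` and its `check` callable are illustrative stand-ins, not the role's actual implementation:

```python
import time


def retry_until(check, retries=10, delay=5.0):
    """Re-run `check` until it returns truthy, mirroring Ansible's
    until/retries/delay loop. Ansible attempts the task once plus
    `retries` more times before marking it failed."""
    for attempt in range(retries + 1):
        result = check()
        if result:
            return result
        if attempt < retries:
            # Matches the log message format, counting down remaining tries.
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            time.sleep(delay)
    raise RuntimeError(f"condition not met after {retries} retries")
```

In the log the condition is only satisfied ~36 seconds in, which is why "Manage cephclient service" dominates the TASKS RECAP.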
2025-09-23 07:49:14.123982 | orchestrator | ok: [testbed-manager]
2025-09-23 07:49:14.123993 | orchestrator |
2025-09-23 07:49:14.124003 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-23 07:49:14.124014 | orchestrator | Tuesday 23 September 2025 07:48:55 +0000 (0:00:36.116) 0:00:40.708 *****
2025-09-23 07:49:14.124025 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-23 07:49:14.124035 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-23 07:49:14.124046 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-23 07:49:14.124057 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-23 07:49:14.124067 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-23 07:49:14.124078 | orchestrator |
2025-09-23 07:49:14.124088 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-23 07:49:14.124099 | orchestrator | Tuesday 23 September 2025 07:48:59 +0000 (0:00:04.244) 0:00:44.952 *****
2025-09-23 07:49:14.124109 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-23 07:49:14.124120 | orchestrator |
2025-09-23 07:49:14.124131 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-23 07:49:14.124141 | orchestrator | Tuesday 23 September 2025 07:49:00 +0000 (0:00:00.470) 0:00:45.422 *****
2025-09-23 07:49:14.124152 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:49:14.124164 | orchestrator |
2025-09-23 07:49:14.124200 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-23 07:49:14.124213 | orchestrator | Tuesday 23 September 2025 07:49:00 +0000 (0:00:00.128) 0:00:45.551 *****
2025-09-23 07:49:14.124226 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:49:14.124239 | orchestrator |
2025-09-23 07:49:14.124251 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-09-23 07:49:14.124263 | orchestrator | Tuesday 23 September 2025 07:49:00 +0000 (0:00:00.335) 0:00:45.887 *****
2025-09-23 07:49:14.124285 | orchestrator | changed: [testbed-manager]
2025-09-23 07:49:14.124297 | orchestrator |
2025-09-23 07:49:14.124310 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-23 07:49:14.124323 | orchestrator | Tuesday 23 September 2025 07:49:03 +0000 (0:00:03.048) 0:00:48.935 *****
2025-09-23 07:49:14.124335 | orchestrator | changed: [testbed-manager]
2025-09-23 07:49:14.124348 | orchestrator |
2025-09-23 07:49:14.124360 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-23 07:49:14.124373 | orchestrator | Tuesday 23 September 2025 07:49:04 +0000 (0:00:00.778) 0:00:49.714 *****
2025-09-23 07:49:14.124386 | orchestrator | changed: [testbed-manager]
2025-09-23 07:49:14.124398 | orchestrator |
2025-09-23 07:49:14.124426 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-23 07:49:14.124439 | orchestrator | Tuesday 23 September 2025 07:49:05 +0000 (0:00:00.665) 0:00:50.379 *****
2025-09-23 07:49:14.124452 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-09-23 07:49:14.124464 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-09-23 07:49:14.124477 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-09-23 07:49:14.124489 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-09-23 07:49:14.124501 | orchestrator |
2025-09-23 07:49:14.124514 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:49:14.124527 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 07:49:14.124541 | orchestrator |
2025-09-23 07:49:14.124551 | orchestrator |
2025-09-23 07:49:14.124580 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:49:14.124591 | orchestrator | Tuesday 23 September 2025 07:49:06 +0000 (0:00:01.530) 0:00:51.909 *****
2025-09-23 07:49:14.124602 | orchestrator | ===============================================================================
2025-09-23 07:49:14.124613 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.12s
2025-09-23 07:49:14.124623 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.24s
2025-09-23 07:49:14.124634 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 3.05s
2025-09-23 07:49:14.124644 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s
2025-09-23 07:49:14.124655 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.28s
2025-09-23 07:49:14.124667 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.02s
2025-09-23 07:49:14.124677 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s
2025-09-23 07:49:14.124688 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.84s
2025-09-23 07:49:14.124699 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s
2025-09-23 07:49:14.124710 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.67s
2025-09-23 07:49:14.124720 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s
2025-09-23 07:49:14.124731 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.34s
2025-09-23 07:49:14.124742 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2025-09-23 07:49:14.124752 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-09-23 07:49:14.124763 | orchestrator |
2025-09-23 07:49:14.124774 | orchestrator |
2025-09-23 07:49:14.124784 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:49:14.124795 | orchestrator |
2025-09-23 07:49:14.124805 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:49:14.124816 | orchestrator | Tuesday 23 September 2025 07:46:37 +0000 (0:00:00.258) 0:00:00.258 *****
2025-09-23 07:49:14.124827 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:49:14.124847 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:49:14.124857 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:49:14.124868 | orchestrator |
2025-09-23 07:49:14.124879 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:49:14.124890 | orchestrator | Tuesday 23 September 2025 07:46:37 +0000 (0:00:00.245) 0:00:00.504 *****
2025-09-23 07:49:14.124900 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-23 07:49:14.124911 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-23 07:49:14.124922 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-23 07:49:14.124933 | orchestrator |
2025-09-23 07:49:14.124944 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-09-23 07:49:14.124954 | orchestrator |
2025-09-23 07:49:14.124965 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-23 07:49:14.124976 | orchestrator | Tuesday 23 September 2025 07:46:38 +0000 (0:00:00.381) 0:00:00.885 *****
2025-09-23 07:49:14.124986 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:49:14.124998 |
orchestrator | 2025-09-23 07:49:14.125009 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-23 07:49:14.125020 | orchestrator | Tuesday 23 September 2025 07:46:38 +0000 (0:00:00.479) 0:00:01.365 ***** 2025-09-23 07:49:14.125041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.125122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.125138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.125159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-23 07:49:14.125172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-23 07:49:14.125205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-23 07:49:14.125256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-23 07:49:14.125278 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-23 07:49:14.125290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-23 07:49:14.125308 | orchestrator | 2025-09-23 07:49:14.125320 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-23 07:49:14.125331 | orchestrator | Tuesday 23 September 2025 07:46:40 +0000 (0:00:01.697) 0:00:03.063 ***** 2025-09-23 07:49:14.125342 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-23 07:49:14.125353 | orchestrator | 2025-09-23 07:49:14.125364 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-23 07:49:14.125375 | orchestrator | Tuesday 23 September 2025 07:46:41 +0000 (0:00:00.832) 0:00:03.895 ***** 
2025-09-23 07:49:14.125386 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:49:14.125397 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:49:14.125408 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:49:14.125418 | orchestrator |
2025-09-23 07:49:14.125429 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-09-23 07:49:14.125440 | orchestrator | Tuesday 23 September 2025 07:46:41 +0000 (0:00:00.502) 0:00:04.397 *****
2025-09-23 07:49:14.125450 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 07:49:14.125461 | orchestrator |
2025-09-23 07:49:14.125472 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-23 07:49:14.125482 | orchestrator | Tuesday 23 September 2025 07:46:42 +0000 (0:00:00.724) 0:00:05.122 *****
2025-09-23 07:49:14.125493 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:49:14.125504 | orchestrator |
2025-09-23 07:49:14.125514 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-09-23 07:49:14.125525 | orchestrator | Tuesday 23 September 2025 07:46:43 +0000 (0:00:00.587) 0:00:05.710 *****
2025-09-23 07:49:14.125537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external':
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.125564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.125585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.125598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-23 07:49:14.125609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-23 07:49:14.125621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.125637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.125654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.125673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.125684 | orchestrator |
2025-09-23 07:49:14.125695 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-09-23 07:49:14.125706 | orchestrator | Tuesday 23 September 2025 07:46:46 +0000 (0:00:03.412) 0:00:09.123 *****
2025-09-23 07:49:14.125718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.125729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.125741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.125752 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:49:14.125774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.125794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.125805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.125816 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:49:14.125828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.125840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.125862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.125882 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:49:14.125911 | orchestrator |
2025-09-23 07:49:14.125930 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-09-23 07:49:14.125950 | orchestrator | Tuesday 23 September 2025 07:46:47 +0000 (0:00:00.801) 0:00:09.924 *****
2025-09-23 07:49:14.125982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.125996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126069 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:49:14.126084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126145 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:49:14.126156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126209 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:49:14.126220 | orchestrator |
2025-09-23 07:49:14.126231 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-09-23 07:49:14.126242 | orchestrator | Tuesday 23 September 2025 07:46:48 +0000 (0:00:00.779) 0:00:10.704 *****
2025-09-23 07:49:14.126258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126398 | orchestrator |
2025-09-23 07:49:14.126409 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-09-23 07:49:14.126420 | orchestrator | Tuesday 23 September 2025 07:46:51 +0000 (0:00:03.227) 0:00:13.932 *****
2025-09-23 07:49:14.126432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126570 | orchestrator |
2025-09-23 07:49:14.126581 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-09-23 07:49:14.126592 | orchestrator | Tuesday 23 September 2025 07:46:56 +0000 (0:00:04.634) 0:00:18.566 *****
2025-09-23 07:49:14.126603 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:49:14.126621 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:49:14.126632 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:49:14.126643 | orchestrator |
2025-09-23 07:49:14.126654 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-09-23 07:49:14.126665 | orchestrator | Tuesday 23 September 2025 07:46:57 +0000 (0:00:01.436) 0:00:20.002 *****
2025-09-23 07:49:14.126676 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:49:14.126686 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:49:14.126697 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:49:14.126708 | orchestrator |
2025-09-23 07:49:14.126718 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-09-23 07:49:14.126729 | orchestrator | Tuesday 23 September 2025 07:46:58 +0000 (0:00:00.558) 0:00:20.561 *****
2025-09-23 07:49:14.126740 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:49:14.126751 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:49:14.126761 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:49:14.126772 | orchestrator |
2025-09-23 07:49:14.126782 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-09-23 07:49:14.126793 | orchestrator | Tuesday 23 September 2025 07:46:58 +0000 (0:00:00.302) 0:00:20.864 *****
2025-09-23 07:49:14.126804 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:49:14.126815 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:49:14.126826 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:49:14.126837 | orchestrator |
2025-09-23 07:49:14.126848 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-09-23 07:49:14.126858 | orchestrator | Tuesday 23 September 2025 07:46:58 +0000 (0:00:00.535) 0:00:21.399 *****
2025-09-23 07:49:14.126870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-23 07:49:14.126938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-23 07:49:14.126968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.126979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.127000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-23 07:49:14.127011 | orchestrator |
2025-09-23 07:49:14.127022 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-23 07:49:14.127033 | orchestrator | Tuesday 23 September 2025 07:47:01 +0000 (0:00:02.377) 0:00:23.776 *****
2025-09-23 07:49:14.127043 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:49:14.127054 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:49:14.127065 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:49:14.127076 | orchestrator |
2025-09-23 07:49:14.127087 | orchestrator | TASK [keystone : 
Copying over wsgi-keystone.conf] ****************************** 2025-09-23 07:49:14.127098 | orchestrator | Tuesday 23 September 2025 07:47:01 +0000 (0:00:00.299) 0:00:24.076 ***** 2025-09-23 07:49:14.127115 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-23 07:49:14.127127 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-23 07:49:14.127138 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-23 07:49:14.127149 | orchestrator | 2025-09-23 07:49:14.127160 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-23 07:49:14.127171 | orchestrator | Tuesday 23 September 2025 07:47:03 +0000 (0:00:01.501) 0:00:25.578 ***** 2025-09-23 07:49:14.127213 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-23 07:49:14.127224 | orchestrator | 2025-09-23 07:49:14.127235 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-23 07:49:14.127246 | orchestrator | Tuesday 23 September 2025 07:47:03 +0000 (0:00:00.794) 0:00:26.372 ***** 2025-09-23 07:49:14.127257 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:49:14.127268 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:49:14.127279 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:49:14.127297 | orchestrator | 2025-09-23 07:49:14.127308 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-23 07:49:14.127319 | orchestrator | Tuesday 23 September 2025 07:47:04 +0000 (0:00:00.650) 0:00:27.023 ***** 2025-09-23 07:49:14.127330 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-23 07:49:14.127340 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-23 07:49:14.127351 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-23 07:49:14.127362 
| orchestrator | 2025-09-23 07:49:14.127373 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-23 07:49:14.127383 | orchestrator | Tuesday 23 September 2025 07:47:05 +0000 (0:00:00.919) 0:00:27.942 ***** 2025-09-23 07:49:14.127394 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:49:14.127404 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:49:14.127415 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:49:14.127426 | orchestrator | 2025-09-23 07:49:14.127436 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-23 07:49:14.127447 | orchestrator | Tuesday 23 September 2025 07:47:05 +0000 (0:00:00.304) 0:00:28.247 ***** 2025-09-23 07:49:14.127458 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-23 07:49:14.127469 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-23 07:49:14.127479 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-23 07:49:14.127490 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-23 07:49:14.127501 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-23 07:49:14.127512 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-23 07:49:14.127523 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-23 07:49:14.127534 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-23 07:49:14.127545 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-23 07:49:14.127555 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-23 07:49:14.127566 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-23 07:49:14.127577 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-23 07:49:14.127587 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-23 07:49:14.127598 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-23 07:49:14.127609 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-23 07:49:14.127620 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-23 07:49:14.127631 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-23 07:49:14.127641 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-23 07:49:14.127652 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-23 07:49:14.127668 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-23 07:49:14.127679 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-23 07:49:14.127690 | orchestrator | 2025-09-23 07:49:14.127701 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-23 07:49:14.127712 | orchestrator | Tuesday 23 September 2025 07:47:14 +0000 (0:00:08.856) 0:00:37.104 ***** 2025-09-23 07:49:14.127729 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-23 07:49:14.127740 | 
orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-23 07:49:14.127751 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-23 07:49:14.127768 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-23 07:49:14.127779 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-23 07:49:14.127789 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-23 07:49:14.127800 | orchestrator | 2025-09-23 07:49:14.127811 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-23 07:49:14.127822 | orchestrator | Tuesday 23 September 2025 07:47:17 +0000 (0:00:02.667) 0:00:39.772 ***** 2025-09-23 07:49:14.127834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.127847 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.127859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-23 07:49:14.127905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-23 07:49:14.127926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-23 07:49:14.127938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-23 07:49:14.127949 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-23 07:49:14.127960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-23 07:49:14.127971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-23 07:49:14.127982 | orchestrator | 2025-09-23 07:49:14.127993 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-09-23 07:49:14.128004 | orchestrator | Tuesday 23 September 2025 07:47:19 +0000 (0:00:02.326) 0:00:42.099 ***** 2025-09-23 07:49:14.128022 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:49:14.128033 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:49:14.128044 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:49:14.128055 | orchestrator | 2025-09-23 07:49:14.128066 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-23 07:49:14.128076 | orchestrator | Tuesday 23 September 2025 07:47:19 +0000 (0:00:00.257) 0:00:42.357 ***** 2025-09-23 07:49:14.128092 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:49:14.128103 | orchestrator | 2025-09-23 07:49:14.128114 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-23 07:49:14.128125 | orchestrator | Tuesday 23 September 2025 07:47:22 +0000 (0:00:02.316) 0:00:44.673 ***** 2025-09-23 07:49:14.128136 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:49:14.128147 | orchestrator | 2025-09-23 07:49:14.128157 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-23 07:49:14.128168 | orchestrator | Tuesday 23 September 2025 07:47:24 +0000 (0:00:02.067) 0:00:46.740 ***** 2025-09-23 07:49:14.128200 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:49:14.128211 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:49:14.128222 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:49:14.128233 | orchestrator | 2025-09-23 07:49:14.128244 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-23 07:49:14.128261 | orchestrator | Tuesday 23 September 2025 07:47:25 +0000 (0:00:00.919) 0:00:47.660 ***** 2025-09-23 07:49:14.128272 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:49:14.128283 | orchestrator | ok: 
[testbed-node-1] 2025-09-23 07:49:14.128294 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:49:14.128313 | orchestrator | 2025-09-23 07:49:14.128333 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-23 07:49:14.128353 | orchestrator | Tuesday 23 September 2025 07:47:25 +0000 (0:00:00.459) 0:00:48.120 ***** 2025-09-23 07:49:14.128372 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:49:14.128391 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:49:14.128411 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:49:14.128430 | orchestrator | 2025-09-23 07:49:14.128449 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-23 07:49:14.128469 | orchestrator | Tuesday 23 September 2025 07:47:25 +0000 (0:00:00.317) 0:00:48.438 ***** 2025-09-23 07:49:14.128488 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:49:14.128509 | orchestrator | 2025-09-23 07:49:14.128529 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-23 07:49:14.128548 | orchestrator | Tuesday 23 September 2025 07:47:39 +0000 (0:00:13.386) 0:01:01.824 ***** 2025-09-23 07:49:14.128560 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:49:14.128570 | orchestrator | 2025-09-23 07:49:14.128581 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-23 07:49:14.128591 | orchestrator | Tuesday 23 September 2025 07:47:49 +0000 (0:00:09.933) 0:01:11.758 ***** 2025-09-23 07:49:14.128602 | orchestrator | 2025-09-23 07:49:14.128613 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-23 07:49:14.128623 | orchestrator | Tuesday 23 September 2025 07:47:49 +0000 (0:00:00.064) 0:01:11.822 ***** 2025-09-23 07:49:14.128633 | orchestrator | 2025-09-23 07:49:14.128644 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-09-23 07:49:14.128655 | orchestrator | Tuesday 23 September 2025 07:47:49 +0000 (0:00:00.065) 0:01:11.888 ***** 2025-09-23 07:49:14.128665 | orchestrator | 2025-09-23 07:49:14.128676 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-23 07:49:14.128686 | orchestrator | Tuesday 23 September 2025 07:47:49 +0000 (0:00:00.065) 0:01:11.953 ***** 2025-09-23 07:49:14.128697 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:49:14.128708 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:49:14.128718 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:49:14.128729 | orchestrator | 2025-09-23 07:49:14.128740 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-23 07:49:14.128760 | orchestrator | Tuesday 23 September 2025 07:48:09 +0000 (0:00:19.798) 0:01:31.751 ***** 2025-09-23 07:49:14.128771 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:49:14.128782 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:49:14.128792 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:49:14.128803 | orchestrator | 2025-09-23 07:49:14.128813 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-23 07:49:14.128824 | orchestrator | Tuesday 23 September 2025 07:48:14 +0000 (0:00:05.068) 0:01:36.820 ***** 2025-09-23 07:49:14.128835 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:49:14.128845 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:49:14.128856 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:49:14.128866 | orchestrator | 2025-09-23 07:49:14.128877 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-23 07:49:14.128887 | orchestrator | Tuesday 23 September 2025 07:48:26 +0000 (0:00:11.922) 0:01:48.742 ***** 2025-09-23 07:49:14.128898 | orchestrator | included: 
/ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:49:14.128909 | orchestrator | 2025-09-23 07:49:14.128919 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-23 07:49:14.128930 | orchestrator | Tuesday 23 September 2025 07:48:26 +0000 (0:00:00.700) 0:01:49.443 ***** 2025-09-23 07:49:14.128941 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:49:14.128951 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:49:14.128962 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:49:14.128973 | orchestrator | 2025-09-23 07:49:14.128983 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-23 07:49:14.128994 | orchestrator | Tuesday 23 September 2025 07:48:27 +0000 (0:00:00.705) 0:01:50.148 ***** 2025-09-23 07:49:14.129005 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:49:14.129015 | orchestrator | 2025-09-23 07:49:14.129026 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-23 07:49:14.129036 | orchestrator | Tuesday 23 September 2025 07:48:29 +0000 (0:00:01.688) 0:01:51.837 ***** 2025-09-23 07:49:14.129047 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-23 07:49:14.129058 | orchestrator | 2025-09-23 07:49:14.129069 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-23 07:49:14.129079 | orchestrator | Tuesday 23 September 2025 07:48:39 +0000 (0:00:10.042) 0:02:01.879 ***** 2025-09-23 07:49:14.129090 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-23 07:49:14.129101 | orchestrator | 2025-09-23 07:49:14.129111 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-23 07:49:14.129122 | orchestrator | Tuesday 23 September 2025 07:49:01 +0000 (0:00:21.834) 0:02:23.714 ***** 2025-09-23 
07:49:14.129138 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-23 07:49:14.129150 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-23 07:49:14.129160 | orchestrator | 2025-09-23 07:49:14.129172 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-23 07:49:14.129207 | orchestrator | Tuesday 23 September 2025 07:49:07 +0000 (0:00:06.530) 0:02:30.245 ***** 2025-09-23 07:49:14.129218 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:49:14.129229 | orchestrator | 2025-09-23 07:49:14.129240 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-23 07:49:14.129250 | orchestrator | Tuesday 23 September 2025 07:49:07 +0000 (0:00:00.124) 0:02:30.369 ***** 2025-09-23 07:49:14.129261 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:49:14.129272 | orchestrator | 2025-09-23 07:49:14.129292 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-23 07:49:14.129303 | orchestrator | Tuesday 23 September 2025 07:49:07 +0000 (0:00:00.122) 0:02:30.492 ***** 2025-09-23 07:49:14.129314 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:49:14.129331 | orchestrator | 2025-09-23 07:49:14.129341 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-23 07:49:14.129352 | orchestrator | Tuesday 23 September 2025 07:49:08 +0000 (0:00:00.176) 0:02:30.668 ***** 2025-09-23 07:49:14.129362 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:49:14.129373 | orchestrator | 2025-09-23 07:49:14.129384 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-23 07:49:14.129394 | orchestrator | Tuesday 23 September 2025 07:49:08 +0000 (0:00:00.575) 0:02:31.244 ***** 2025-09-23 07:49:14.129405 
| orchestrator | ok: [testbed-node-0] 2025-09-23 07:49:14.129415 | orchestrator | 2025-09-23 07:49:14.129426 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-23 07:49:14.129437 | orchestrator | Tuesday 23 September 2025 07:49:11 +0000 (0:00:02.876) 0:02:34.120 ***** 2025-09-23 07:49:14.129447 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:49:14.129457 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:49:14.129468 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:49:14.129479 | orchestrator | 2025-09-23 07:49:14.129489 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:49:14.129500 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-23 07:49:14.129511 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-23 07:49:14.129522 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-23 07:49:14.129533 | orchestrator | 2025-09-23 07:49:14.129543 | orchestrator | 2025-09-23 07:49:14.129554 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:49:14.129564 | orchestrator | Tuesday 23 September 2025 07:49:12 +0000 (0:00:00.563) 0:02:34.684 ***** 2025-09-23 07:49:14.129575 | orchestrator | =============================================================================== 2025-09-23 07:49:14.129585 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.83s 2025-09-23 07:49:14.129596 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.80s 2025-09-23 07:49:14.129607 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.39s 2025-09-23 07:49:14.129617 | orchestrator | keystone : Restart 
keystone container ---------------------------------- 11.92s 2025-09-23 07:49:14.129628 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.04s 2025-09-23 07:49:14.129638 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.93s 2025-09-23 07:49:14.129649 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.86s 2025-09-23 07:49:14.129659 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.53s 2025-09-23 07:49:14.129670 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.07s 2025-09-23 07:49:14.129680 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.63s 2025-09-23 07:49:14.129691 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s 2025-09-23 07:49:14.129702 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.23s 2025-09-23 07:49:14.129712 | orchestrator | keystone : Creating default user role ----------------------------------- 2.88s 2025-09-23 07:49:14.129723 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.67s 2025-09-23 07:49:14.129733 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.38s 2025-09-23 07:49:14.129744 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.33s 2025-09-23 07:49:14.129754 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.32s 2025-09-23 07:49:14.129765 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.07s 2025-09-23 07:49:14.129782 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.70s 2025-09-23 07:49:14.129793 | orchestrator | keystone : Run key distribution 
----------------------------------------- 1.69s
2025-09-23 07:49:14.129803 | orchestrator | 2025-09-23 07:49:14 | INFO  | Task bd199535-5296-48dd-b502-a7cbb350f1a0 is in state STARTED
2025-09-23 07:49:14.129820 | orchestrator | 2025-09-23 07:49:14 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED
2025-09-23 07:49:14.129831 | orchestrator | 2025-09-23 07:49:14 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:49:14.129842 | orchestrator | 2025-09-23 07:49:14 | INFO  | Task 5a495c36-b277-47ea-9288-2e3f9dc95842 is in state STARTED
2025-09-23 07:49:14.129852 | orchestrator | 2025-09-23 07:49:14 | INFO  | Task 40a8ec7f-4d5a-4bf9-bc0f-1586d9477e35 is in state SUCCESS
2025-09-23 07:49:14.129863 | orchestrator | 2025-09-23 07:49:14 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:49:17.161779 | orchestrator | 2025-09-23 07:49:17 | INFO  | Task 6a45c0bb-5cb7-4f09-858c-dbcb7ccf5bc6 is in state STARTED
[identical status polls of tasks bd199535, a253f435, 6a45c0bb, 68d644df, and 5a495c36 repeated every ~3 seconds from 07:49:17 to 07:50:36 elided]
2025-09-23 07:50:39.286451 | orchestrator | 2025-09-23 07:50:39 | INFO  | Task bd199535-5296-48dd-b502-a7cbb350f1a0 is in state SUCCESS
[identical status polls of the four remaining tasks repeated every ~3 seconds from 07:50:42 to 07:51:18 elided]
2025-09-23 07:51:21.753850 | orchestrator | 2025-09-23 07:51:21 | INFO  | Task
a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED
2025-09-23 07:51:21.755005 | orchestrator | 2025-09-23 07:51:21 | INFO  | Task 6a45c0bb-5cb7-4f09-858c-dbcb7ccf5bc6 is in state STARTED
2025-09-23 07:51:21.755584 | orchestrator | 2025-09-23 07:51:21 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:51:21.757260 | orchestrator | 2025-09-23 07:51:21 | INFO  | Task 5a495c36-b277-47ea-9288-2e3f9dc95842 is in state SUCCESS
2025-09-23 07:51:21.757591 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:51:21.757614 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:51:21.757625 | orchestrator | Tuesday 23 September 2025 07:49:11 +0000 (0:00:00.192) 0:00:00.192 *****
2025-09-23 07:51:21.757636 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:51:21.757648 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:51:21.757659 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:51:21.757800 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:51:21.757811 | orchestrator | Tuesday 23 September 2025 07:49:11 +0000 (0:00:00.319) 0:00:00.512 *****
2025-09-23 07:51:21.757822 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-23 07:51:21.757834 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-23 07:51:21.757845 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-23 07:51:21.757879 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-23 07:51:21.757901 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-23 07:51:21.757932 | orchestrator | Tuesday 23 September 2025 07:49:12 +0000 (0:00:00.936) 0:00:01.448 *****
2025-09-23 07:51:21.757944 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:51:21.757955 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:51:21.757965 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:51:21.757986 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:51:21.757998 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:51:21.758010 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:51:21.758218 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:51:21.758428 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:51:21.758442 | orchestrator | Tuesday 23 September 2025 07:49:13 +0000 (0:00:01.023) 0:00:02.472 *****
2025-09-23 07:51:21.758453 | orchestrator | ===============================================================================
2025-09-23 07:51:21.758464 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.02s
2025-09-23 07:51:21.758474 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s
2025-09-23 07:51:21.758485 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-09-23 07:51:21.758518 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-09-23 07:51:21.758539 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-09-23 07:51:21.758580 | orchestrator | Tuesday 23 September 2025 07:49:11 +0000 (0:00:00.287) 0:00:00.287 *****
2025-09-23 07:51:21.758593 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.758616 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-09-23 07:51:21.758627 | orchestrator | Tuesday 23 September 2025 07:49:12 +0000 (0:00:01.822) 0:00:02.110 *****
2025-09-23 07:51:21.758638 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.758700 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-09-23 07:51:21.758712 | orchestrator | Tuesday 23 September 2025 07:49:14 +0000 (0:00:01.397) 0:00:03.507 *****
2025-09-23 07:51:21.758723 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.758744 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-09-23 07:51:21.758755 | orchestrator | Tuesday 23 September 2025 07:49:15 +0000 (0:00:01.177) 0:00:04.685 *****
2025-09-23 07:51:21.758766 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.758788 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-09-23 07:51:21.758798 | orchestrator | Tuesday 23 September 2025 07:49:16 +0000 (0:00:01.334) 0:00:06.020 *****
2025-09-23 07:51:21.758809 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.758856 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-09-23 07:51:21.758869 | orchestrator | Tuesday 23 September 2025 07:49:18 +0000 (0:00:01.166) 0:00:07.187 *****
2025-09-23 07:51:21.758881 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.758902 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-09-23 07:51:21.758913 | orchestrator | Tuesday 23 September 2025 07:49:19 +0000 (0:00:01.346) 0:00:08.533 *****
2025-09-23 07:51:21.758923 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.758957 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-09-23 07:51:21.758968 | orchestrator | Tuesday 23 September 2025 07:49:21 +0000 (0:00:00.949) 0:00:10.477 *****
2025-09-23 07:51:21.758978 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.759001 | orchestrator | TASK [Create admin user] *******************************************************
2025-09-23 07:51:21.759012 | orchestrator | Tuesday 23 September 2025 07:49:22 +0000 (0:00:00.949) 0:00:11.426 *****
2025-09-23 07:51:21.759022 | orchestrator | changed: [testbed-manager]
2025-09-23 07:51:21.759044 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-09-23 07:51:21.759077 | orchestrator | Tuesday 23 September 2025 07:50:14 +0000 (0:00:51.995) 0:01:03.422 *****
2025-09-23 07:51:21.759104 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:51:21.759126 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-23 07:51:21.759148 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-23 07:51:21.759159 | orchestrator | Tuesday 23 September 2025 07:50:14 +0000 (0:00:00.176)
0:01:03.598 ***** 2025-09-23 07:51:21.759169 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:51:21.759180 | orchestrator | 2025-09-23 07:51:21.759191 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-23 07:51:21.759202 | orchestrator | 2025-09-23 07:51:21.759219 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-23 07:51:21.759238 | orchestrator | Tuesday 23 September 2025 07:50:26 +0000 (0:00:11.702) 0:01:15.301 ***** 2025-09-23 07:51:21.759256 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:51:21.759275 | orchestrator | 2025-09-23 07:51:21.759293 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-23 07:51:21.759311 | orchestrator | 2025-09-23 07:51:21.759340 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-23 07:51:21.759361 | orchestrator | Tuesday 23 September 2025 07:50:37 +0000 (0:00:11.314) 0:01:26.615 ***** 2025-09-23 07:51:21.759379 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:51:21.759398 | orchestrator | 2025-09-23 07:51:21.759419 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:51:21.759438 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-23 07:51:21.759455 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:51:21.759467 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:51:21.759478 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:51:21.759488 | orchestrator | 2025-09-23 07:51:21.759499 | orchestrator | 2025-09-23 07:51:21.759510 | orchestrator | 2025-09-23 07:51:21.759521 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:51:21.759531 | orchestrator | Tuesday 23 September 2025 07:50:38 +0000 (0:00:01.132) 0:01:27.748 ***** 2025-09-23 07:51:21.759542 | orchestrator | =============================================================================== 2025-09-23 07:51:21.759553 | orchestrator | Create admin user ------------------------------------------------------ 52.00s 2025-09-23 07:51:21.759563 | orchestrator | Restart ceph manager service ------------------------------------------- 24.15s 2025-09-23 07:51:21.759574 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.94s 2025-09-23 07:51:21.759585 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.82s 2025-09-23 07:51:21.759595 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.40s 2025-09-23 07:51:21.759616 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.35s 2025-09-23 07:51:21.759627 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.33s 2025-09-23 07:51:21.759638 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.18s 2025-09-23 07:51:21.759648 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.17s 2025-09-23 07:51:21.759659 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.95s 2025-09-23 07:51:21.759670 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2025-09-23 07:51:21.759681 | orchestrator | 2025-09-23 07:51:21.759691 | orchestrator | 2025-09-23 07:51:21.759703 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:51:21.759714 | orchestrator | 2025-09-23 07:51:21.759724 | orchestrator 
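[editor's note: the dashboard bootstrap play above maps onto a fixed `ceph` CLI sequence: disable the mgr dashboard module, set the `mgr/dashboard/*` config options shown in the task names, re-enable the module, then create the admin user. A hedged sketch that only assembles the command lines and leaves execution to the caller; the `ceph dashboard ac-user-create` form, the user name `admin`, and the `administrator` role are assumptions inferred from the task names, not read from the playbook:]

```python
def ceph_dashboard_bootstrap_cmds(password_file: str = "/tmp/ceph_dashboard_password") -> list[str]:
    """Assemble the ceph CLI sequence mirrored by the bootstrap play above."""
    # Option values exactly as logged by the "Set mgr/dashboard/..." tasks.
    settings = {
        "mgr/dashboard/ssl": "false",
        "mgr/dashboard/server_port": "7000",
        "mgr/dashboard/server_addr": "0.0.0.0",
        "mgr/dashboard/standby_behaviour": "error",
        "mgr/dashboard/standby_error_status_code": "404",
    }
    cmds = ["ceph mgr module disable dashboard"]
    cmds += [f"ceph config set mgr {key} {value}" for key, value in settings.items()]
    cmds.append("ceph mgr module enable dashboard")
    # Password is read from a file (hypothetical path), matching the
    # write-temp-file / create-user / remove-temp-file tasks in the play.
    cmds.append(f"ceph dashboard ac-user-create admin -i {password_file} administrator")
    return cmds
```

[the subsequent serial "Restart ceph manager service" plays, one node at a time, are what make the new mgr/dashboard settings take effect without losing mgr quorum.]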
| TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:51:21.759736 | orchestrator | Tuesday 23 September 2025 07:49:20 +0000 (0:00:00.354) 0:00:00.354 ***** 2025-09-23 07:51:21.759746 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:51:21.759757 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:51:21.759768 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:51:21.759779 | orchestrator | 2025-09-23 07:51:21.759789 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:51:21.759800 | orchestrator | Tuesday 23 September 2025 07:49:20 +0000 (0:00:00.452) 0:00:00.807 ***** 2025-09-23 07:51:21.759811 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-23 07:51:21.759821 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-23 07:51:21.759832 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-23 07:51:21.759843 | orchestrator | 2025-09-23 07:51:21.759854 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-23 07:51:21.759864 | orchestrator | 2025-09-23 07:51:21.759875 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-23 07:51:21.759886 | orchestrator | Tuesday 23 September 2025 07:49:20 +0000 (0:00:00.381) 0:00:01.189 ***** 2025-09-23 07:51:21.759897 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:51:21.759909 | orchestrator | 2025-09-23 07:51:21.759920 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-23 07:51:21.759931 | orchestrator | Tuesday 23 September 2025 07:49:21 +0000 (0:00:00.560) 0:00:01.750 ***** 2025-09-23 07:51:21.759942 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-23 07:51:21.759952 | 
orchestrator | 2025-09-23 07:51:21.759963 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-23 07:51:21.759985 | orchestrator | Tuesday 23 September 2025 07:49:25 +0000 (0:00:04.131) 0:00:05.881 ***** 2025-09-23 07:51:21.759998 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-23 07:51:21.760009 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-23 07:51:21.760020 | orchestrator | 2025-09-23 07:51:21.760031 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-23 07:51:21.760042 | orchestrator | Tuesday 23 September 2025 07:49:32 +0000 (0:00:06.984) 0:00:12.866 ***** 2025-09-23 07:51:21.760078 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-23 07:51:21.760091 | orchestrator | 2025-09-23 07:51:21.760102 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-23 07:51:21.760113 | orchestrator | Tuesday 23 September 2025 07:49:36 +0000 (0:00:03.551) 0:00:16.418 ***** 2025-09-23 07:51:21.760124 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-23 07:51:21.760142 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-23 07:51:21.760153 | orchestrator | 2025-09-23 07:51:21.760164 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-23 07:51:21.760189 | orchestrator | Tuesday 23 September 2025 07:49:40 +0000 (0:00:04.277) 0:00:20.695 ***** 2025-09-23 07:51:21.760200 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-23 07:51:21.760212 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-23 07:51:21.760222 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-23 07:51:21.760234 | 
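[editor's note: the `service-ks-register` tasks above follow the standard Keystone registration pattern: create the service, one endpoint per interface, the `service` project and the `barbican` user, then grant roles. A sketch rendering the equivalent `openstack` CLI calls; the endpoint URLs and names come from the log, while `--region RegionOne` and `--domain default` are assumptions not visible in this output:]

```python
def barbican_register_cmds() -> list[str]:
    """Render openstack CLI calls matching the service-ks-register tasks above."""
    # Interfaces and URLs as logged by "barbican | Creating endpoints".
    endpoints = {
        "internal": "https://api-int.testbed.osism.xyz:9311",
        "public": "https://api.testbed.osism.xyz:9311",
    }
    cmds = ["openstack service create --name barbican key-manager"]
    cmds += [
        f"openstack endpoint create --region RegionOne barbican {interface} {url}"
        for interface, url in endpoints.items()
    ]
    cmds += [
        "openstack project create --domain default service",
        "openstack user create --project service barbican",
        "openstack role add --user barbican --project service admin",
    ]
    return cmds
```

[the play additionally creates the Barbican-specific roles logged below (key-manager:service-admin, creator, observer, audit); those would be plain `openstack role create` calls in the same style.]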
orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-23 07:51:21.760245 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-23 07:51:21.760256 | orchestrator | 2025-09-23 07:51:21.760266 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-23 07:51:21.760278 | orchestrator | Tuesday 23 September 2025 07:49:56 +0000 (0:00:16.298) 0:00:36.994 ***** 2025-09-23 07:51:21.760288 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-23 07:51:21.760299 | orchestrator | 2025-09-23 07:51:21.760310 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-23 07:51:21.760323 | orchestrator | Tuesday 23 September 2025 07:50:00 +0000 (0:00:03.781) 0:00:40.776 ***** 2025-09-23 07:51:21.760346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.760371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.760451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.760486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760568 | orchestrator | 2025-09-23 07:51:21.760580 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-23 07:51:21.760599 | orchestrator | Tuesday 23 September 2025 07:50:02 +0000 (0:00:02.097) 0:00:42.873 ***** 2025-09-23 07:51:21.760618 | orchestrator | changed: 
[testbed-node-0] => (item=barbican-api/vassals) 2025-09-23 07:51:21.760629 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-23 07:51:21.760640 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-23 07:51:21.760651 | orchestrator | 2025-09-23 07:51:21.760662 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-23 07:51:21.760673 | orchestrator | Tuesday 23 September 2025 07:50:03 +0000 (0:00:01.399) 0:00:44.273 ***** 2025-09-23 07:51:21.760684 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:51:21.760695 | orchestrator | 2025-09-23 07:51:21.760706 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-23 07:51:21.760717 | orchestrator | Tuesday 23 September 2025 07:50:04 +0000 (0:00:00.187) 0:00:44.460 ***** 2025-09-23 07:51:21.760728 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:51:21.760739 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:51:21.760755 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:51:21.760766 | orchestrator | 2025-09-23 07:51:21.760777 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-23 07:51:21.760788 | orchestrator | Tuesday 23 September 2025 07:50:05 +0000 (0:00:00.927) 0:00:45.388 ***** 2025-09-23 07:51:21.760799 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:51:21.760809 | orchestrator | 2025-09-23 07:51:21.760820 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-23 07:51:21.760831 | orchestrator | Tuesday 23 September 2025 07:50:05 +0000 (0:00:00.851) 0:00:46.239 ***** 2025-09-23 07:51:21.760843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.760855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.760867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.760895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}}) 2025-09-23 07:51:21.760957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.760974 | orchestrator | 2025-09-23 07:51:21.760985 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-23 07:51:21.760996 | orchestrator | Tuesday 23 September 2025 07:50:09 +0000 (0:00:03.336) 0:00:49.576 ***** 2025-09-23 07:51:21.761015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:51:21.761028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761123 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:51:21.761135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:51:21.761147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761179 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:51:21.761204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:51:21.761217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761240 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:51:21.761252 | orchestrator | 
2025-09-23 07:51:21.761262 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-09-23 07:51:21.761273 | orchestrator | Tuesday 23 September 2025 07:50:11 +0000 (0:00:02.024) 0:00:51.600 *****
2025-09-23 07:51:21.761284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-23 07:51:21.761302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-23 07:51:21.761321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker',
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761333 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:51:21.761349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:51:21.761361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.761390 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:51:21.761401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:51:21.761426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-23 07:51:21.761447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:51:21.761474 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:51:21.761493 | orchestrator |
2025-09-23 07:51:21.761510 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-09-23 07:51:21.761522 | orchestrator | Tuesday 23 September 2025 07:50:13 +0000 (0:00:01.985) 0:00:53.585 *****
2025-09-23 07:51:21.761533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.761825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.761862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.761874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.761892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.761904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.761916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.761934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.761952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:51:21.761963 | orchestrator |
2025-09-23 07:51:21.761975 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-09-23 07:51:21.761986 | orchestrator | Tuesday 23 September 2025 07:50:17 +0000 (0:00:03.831) 0:00:57.417 *****
2025-09-23 07:51:21.761997 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:51:21.762008 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:51:21.762095 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:51:21.762108 | orchestrator |
2025-09-23 07:51:21.762118 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-09-23 07:51:21.762129 | orchestrator | Tuesday 23 September 2025 07:50:19 +0000 (0:00:01.335) 0:01:00.300 *****
2025-09-23 07:51:21.762140 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 07:51:21.762151 | orchestrator |
2025-09-23 07:51:21.762162 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-09-23 07:51:21.762173 | orchestrator | Tuesday 23 September 2025 07:50:21 +0000 (0:00:01.335) 0:01:01.636 *****
2025-09-23 07:51:21.762184 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:51:21.762195 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:51:21.762206 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:51:21.762216 | orchestrator |
2025-09-23 07:51:21.762227 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-09-23 07:51:21.762238 | orchestrator | Tuesday 23
September 2025 07:50:22 +0000 (0:00:00.688) 0:01:02.325 ***** 2025-09-23 07:51:21.762254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.762267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 
07:51:21.762294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.762306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:51:21.762373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:51:21.762384 | orchestrator |
2025-09-23 07:51:21.762395 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-09-23 07:51:21.762411 | orchestrator | Tuesday 23 September 2025 07:50:32 +0000 (0:00:10.957) 0:01:13.283 *****
2025-09-23 07:51:21.762425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled':
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:51:21.762439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.762456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.762469 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:51:21.762482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:51:21.762501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.762520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.762532 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:51:21.762543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-23 07:51:21.762554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-23 07:51:21.762570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:51:21.762587 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:51:21.762598 | orchestrator |
2025-09-23 07:51:21.762609 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-09-23 07:51:21.762620 | orchestrator | Tuesday 23 September 2025 07:50:33 +0000 (0:00:00.901) 0:01:14.184 *****
2025-09-23 07:51:21.762631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-23 07:51:21.762649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.762660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-23 07:51:21.762671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:51:21.762758 | orchestrator | 2025-09-23 07:51:21.762769 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-23 07:51:21.762780 | orchestrator | Tuesday 23 September 2025 07:50:38 +0000 (0:00:04.270) 0:01:18.455 ***** 2025-09-23 07:51:21.762791 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:51:21.762801 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:51:21.762813 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:51:21.762823 | orchestrator | 2025-09-23 07:51:21.762834 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-23 
07:51:21.762845 | orchestrator | Tuesday 23 September 2025 07:50:38 +0000 (0:00:00.615) 0:01:19.070 ***** 2025-09-23 07:51:21.762856 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:51:21.762867 | orchestrator | 2025-09-23 07:51:21.762877 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-23 07:51:21.762888 | orchestrator | Tuesday 23 September 2025 07:50:40 +0000 (0:00:02.158) 0:01:21.229 ***** 2025-09-23 07:51:21.762906 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:51:21.762916 | orchestrator | 2025-09-23 07:51:21.762927 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-23 07:51:21.762937 | orchestrator | Tuesday 23 September 2025 07:50:43 +0000 (0:00:02.354) 0:01:23.583 ***** 2025-09-23 07:51:21.762948 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:51:21.762959 | orchestrator | 2025-09-23 07:51:21.762969 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-23 07:51:21.762980 | orchestrator | Tuesday 23 September 2025 07:50:55 +0000 (0:00:12.060) 0:01:35.644 ***** 2025-09-23 07:51:21.762991 | orchestrator | 2025-09-23 07:51:21.763002 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-23 07:51:21.763016 | orchestrator | Tuesday 23 September 2025 07:50:55 +0000 (0:00:00.200) 0:01:35.845 ***** 2025-09-23 07:51:21.763027 | orchestrator | 2025-09-23 07:51:21.763038 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-23 07:51:21.763048 | orchestrator | Tuesday 23 September 2025 07:50:55 +0000 (0:00:00.188) 0:01:36.033 ***** 2025-09-23 07:51:21.763073 | orchestrator | 2025-09-23 07:51:21.763084 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-23 07:51:21.763095 | orchestrator | Tuesday 23 September 2025 07:50:55 +0000 
(0:00:00.131) 0:01:36.165 ***** 2025-09-23 07:51:21.763106 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:51:21.763117 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:51:21.763128 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:51:21.763138 | orchestrator | 2025-09-23 07:51:21.763149 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-23 07:51:21.763160 | orchestrator | Tuesday 23 September 2025 07:51:02 +0000 (0:00:06.795) 0:01:42.961 ***** 2025-09-23 07:51:21.763171 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:51:21.763181 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:51:21.763192 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:51:21.763203 | orchestrator | 2025-09-23 07:51:21.763214 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-23 07:51:21.763224 | orchestrator | Tuesday 23 September 2025 07:51:13 +0000 (0:00:10.476) 0:01:53.437 ***** 2025-09-23 07:51:21.763235 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:51:21.763246 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:51:21.763257 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:51:21.763267 | orchestrator | 2025-09-23 07:51:21.763278 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:51:21.763289 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-23 07:51:21.763300 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-23 07:51:21.763311 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-23 07:51:21.763322 | orchestrator | 2025-09-23 07:51:21.763333 | orchestrator | 2025-09-23 07:51:21.763343 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-23 07:51:21.763354 | orchestrator | Tuesday 23 September 2025 07:51:19 +0000 (0:00:06.542) 0:01:59.979 ***** 2025-09-23 07:51:21.763365 | orchestrator | =============================================================================== 2025-09-23 07:51:21.763376 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.30s 2025-09-23 07:51:21.763392 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.06s 2025-09-23 07:51:21.763403 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.96s 2025-09-23 07:51:21.763414 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.48s 2025-09-23 07:51:21.763425 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.98s 2025-09-23 07:51:21.763446 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.80s 2025-09-23 07:51:21.763456 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.54s 2025-09-23 07:51:21.763467 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.28s 2025-09-23 07:51:21.763478 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.27s 2025-09-23 07:51:21.763489 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.13s 2025-09-23 07:51:21.763500 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.83s 2025-09-23 07:51:21.763510 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.78s 2025-09-23 07:51:21.763521 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.55s 2025-09-23 07:51:21.763532 | orchestrator | service-cert-copy : barbican | 
Copying over extra CA certificates ------- 3.34s 2025-09-23 07:51:21.763542 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.88s 2025-09-23 07:51:21.763553 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.35s 2025-09-23 07:51:21.763563 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.16s 2025-09-23 07:51:21.763574 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.10s 2025-09-23 07:51:21.763585 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.02s 2025-09-23 07:51:21.763596 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.99s 2025-09-23 07:51:21.763607 | orchestrator | 2025-09-23 07:51:21 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:51:24.783270 | orchestrator | 2025-09-23 07:51:24 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:51:24.783481 | orchestrator | 2025-09-23 07:51:24 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED 2025-09-23 07:51:24.784046 | orchestrator | 2025-09-23 07:51:24 | INFO  | Task 6a45c0bb-5cb7-4f09-858c-dbcb7ccf5bc6 is in state STARTED 2025-09-23 07:51:24.784717 | orchestrator | 2025-09-23 07:51:24 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED 2025-09-23 07:51:24.784801 | orchestrator | 2025-09-23 07:51:24 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:51:27.820160 | orchestrator | 2025-09-23 07:51:27 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:51:27.820600 | orchestrator | 2025-09-23 07:51:27 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED 2025-09-23 07:51:27.821682 | orchestrator | 2025-09-23 07:51:27 | INFO  | Task 6a45c0bb-5cb7-4f09-858c-dbcb7ccf5bc6 is in state STARTED 2025-09-23 
07:51:27.822966 | orchestrator | 2025-09-23 07:51:27 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED 2025-09-23 07:51:27.822998 | orchestrator | 2025-09-23 07:51:27 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:52:31.781746 | orchestrator | 2025-09-23 07:52:31 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:52:31.781842 | orchestrator | 2025-09-23 07:52:31 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED 2025-09-23 07:52:31.784051 | orchestrator | 2025-09-23 07:52:31 | INFO  | Task 6a45c0bb-5cb7-4f09-858c-dbcb7ccf5bc6 is in state SUCCESS 2025-09-23 07:52:31.786646 | orchestrator | 2025-09-23 07:52:31.786676 | orchestrator | 2025-09-23
07:52:31.786688 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:52:31.786700 | orchestrator | 2025-09-23 07:52:31.786711 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:52:31.786722 | orchestrator | Tuesday 23 September 2025 07:49:20 +0000 (0:00:00.523) 0:00:00.523 ***** 2025-09-23 07:52:31.786733 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:52:31.786745 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:52:31.786756 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:52:31.786843 | orchestrator | 2025-09-23 07:52:31.786857 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:52:31.786868 | orchestrator | Tuesday 23 September 2025 07:49:21 +0000 (0:00:00.351) 0:00:00.875 ***** 2025-09-23 07:52:31.786880 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-23 07:52:31.786892 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-23 07:52:31.786903 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-23 07:52:31.786914 | orchestrator | 2025-09-23 07:52:31.786925 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-23 07:52:31.786936 | orchestrator | 2025-09-23 07:52:31.786947 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-23 07:52:31.786958 | orchestrator | Tuesday 23 September 2025 07:49:21 +0000 (0:00:00.605) 0:00:01.480 ***** 2025-09-23 07:52:31.787124 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:52:31.787140 | orchestrator | 2025-09-23 07:52:31.787151 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-23 07:52:31.787162 | orchestrator | Tuesday 23 
September 2025 07:49:22 +0000 (0:00:00.895) 0:00:02.376 ***** 2025-09-23 07:52:31.787173 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-23 07:52:31.787183 | orchestrator | 2025-09-23 07:52:31.787194 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-23 07:52:31.787205 | orchestrator | Tuesday 23 September 2025 07:49:26 +0000 (0:00:04.372) 0:00:06.749 ***** 2025-09-23 07:52:31.787216 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-23 07:52:31.787274 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-23 07:52:31.787287 | orchestrator | 2025-09-23 07:52:31.787299 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-23 07:52:31.787311 | orchestrator | Tuesday 23 September 2025 07:49:33 +0000 (0:00:06.730) 0:00:13.479 ***** 2025-09-23 07:52:31.787324 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-23 07:52:31.787336 | orchestrator | 2025-09-23 07:52:31.787348 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-23 07:52:31.787361 | orchestrator | Tuesday 23 September 2025 07:49:37 +0000 (0:00:03.641) 0:00:17.121 ***** 2025-09-23 07:52:31.787373 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-23 07:52:31.787385 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-23 07:52:31.787397 | orchestrator | 2025-09-23 07:52:31.787409 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-23 07:52:31.787421 | orchestrator | Tuesday 23 September 2025 07:49:41 +0000 (0:00:04.376) 0:00:21.497 ***** 2025-09-23 07:52:31.787434 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-23 07:52:31.787446 | orchestrator | 
2025-09-23 07:52:31.787459 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-23 07:52:31.787471 | orchestrator | Tuesday 23 September 2025 07:49:45 +0000 (0:00:03.364) 0:00:24.862 ***** 2025-09-23 07:52:31.787483 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-23 07:52:31.787495 | orchestrator | 2025-09-23 07:52:31.787507 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-23 07:52:31.787519 | orchestrator | Tuesday 23 September 2025 07:49:49 +0000 (0:00:04.601) 0:00:29.464 ***** 2025-09-23 07:52:31.787535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.787650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.787669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.787853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-09-23 07:52:31.787867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.787878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.787890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2025-09-23 07:52:31.787916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.787929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.787946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.787958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.787969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.787981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788122 | orchestrator | 2025-09-23 07:52:31.788133 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-23 07:52:31.788144 | orchestrator | Tuesday 23 September 2025 07:49:52 +0000 (0:00:02.999) 0:00:32.464 ***** 2025-09-23 07:52:31.788155 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:52:31.788166 | orchestrator | 2025-09-23 07:52:31.788177 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-23 07:52:31.788187 | orchestrator | Tuesday 23 September 2025 07:49:52 +0000 (0:00:00.120) 0:00:32.584 ***** 2025-09-23 07:52:31.788198 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:52:31.788209 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:52:31.788219 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:52:31.788230 | orchestrator | 2025-09-23 07:52:31.788254 | orchestrator | TASK [designate : include_tasks] *********************************************** 
2025-09-23 07:52:31.788274 | orchestrator | Tuesday 23 September 2025 07:49:53 +0000 (0:00:00.272) 0:00:32.856 ***** 2025-09-23 07:52:31.788285 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:52:31.788296 | orchestrator | 2025-09-23 07:52:31.788307 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-23 07:52:31.788318 | orchestrator | Tuesday 23 September 2025 07:49:53 +0000 (0:00:00.587) 0:00:33.444 ***** 2025-09-23 07:52:31.788329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.788360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.788372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.788384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788395 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788445 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.788634 | orchestrator | 2025-09-23 07:52:31.788645 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-23 07:52:31.788656 | orchestrator | Tuesday 23 September 2025 07:49:59 +0000 (0:00:05.730) 0:00:39.175 ***** 2025-09-23 07:52:31.788667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.788703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.788715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:52:31.788726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:52:31.788738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.788749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.788760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.788777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789497 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:52:31.789509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789544 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:52:31.789555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.789579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:52:31.789605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789675 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:52:31.789721 | orchestrator | 2025-09-23 07:52:31.789740 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-23 07:52:31.789757 | orchestrator | Tuesday 23 September 
2025 07:50:00 +0000 (0:00:00.879) 0:00:40.054 ***** 2025-09-23 07:52:31.789775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.789806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:52:31.789840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.789914 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:52:31.789932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.789962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:52:31.790085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.790109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.790122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.790135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.790147 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:52:31.790160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.790181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2025-09-23 07:52:31.790199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.790229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.790242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.790255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.790267 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:52:31.790279 | orchestrator | 2025-09-23 07:52:31.790292 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-23 07:52:31.790304 | orchestrator | Tuesday 23 September 2025 07:50:02 +0000 (0:00:02.623) 0:00:42.678 ***** 2025-09-23 07:52:31.790317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.790338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.790363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.790377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-09-23 07:52:31.790565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790587 | orchestrator | 2025-09-23 07:52:31.790598 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-23 07:52:31.790609 | orchestrator | Tuesday 23 September 2025 07:50:08 +0000 (0:00:06.084) 0:00:48.762 ***** 2025-09-23 07:52:31.790626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.790637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.790653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.790670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790825 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.790870 | orchestrator | 2025-09-23 07:52:31.790880 | orchestrator | TASK [designate : Copying over pools.yaml] 
************************************* 2025-09-23 07:52:31.790897 | orchestrator | Tuesday 23 September 2025 07:50:30 +0000 (0:00:21.487) 0:01:10.250 ***** 2025-09-23 07:52:31.790907 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-23 07:52:31.790918 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-23 07:52:31.790929 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-23 07:52:31.790939 | orchestrator | 2025-09-23 07:52:31.790950 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-23 07:52:31.790960 | orchestrator | Tuesday 23 September 2025 07:50:36 +0000 (0:00:06.195) 0:01:16.445 ***** 2025-09-23 07:52:31.790971 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-23 07:52:31.790981 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-23 07:52:31.791011 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-23 07:52:31.791022 | orchestrator | 2025-09-23 07:52:31.791032 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-23 07:52:31.791043 | orchestrator | Tuesday 23 September 2025 07:50:40 +0000 (0:00:03.957) 0:01:20.402 ***** 2025-09-23 07:52:31.791054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-09-23 07:52:31.791224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791261 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791310 | orchestrator | 2025-09-23 07:52:31.791321 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-23 07:52:31.791332 | orchestrator | Tuesday 
23 September 2025 07:50:44 +0000 (0:00:03.525) 0:01:23.928 ***** 2025-09-23 07:52:31.791343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-09-23 07:52:31.791449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791500 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.791584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:52:31.791595 | orchestrator |
2025-09-23 07:52:31.791605 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-23 07:52:31.791616 | orchestrator | Tuesday 23 September 2025 07:50:47 +0000 (0:00:03.569) 0:01:27.497 *****
2025-09-23 07:52:31.791627 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:31.791638 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:31.791648 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:31.791659 | orchestrator |
2025-09-23 07:52:31.791670 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-09-23 07:52:31.791680 | orchestrator | Tuesday 23 September 2025 07:50:48 +0000 (0:00:00.594) 0:01:28.092 *****
2025-09-23 07:52:31.791691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:52:31.791714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791754 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791792 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:52:31.791812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:52:31.791851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.791944 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:52:31.791955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-23 07:52:31.791966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-23 07:52:31.791977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-23 07:52:31.792032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-23 07:52:31.792051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-23 07:52:31.792070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:52:31.792082 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:31.792093 | orchestrator |
2025-09-23 07:52:31.792104 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-09-23 07:52:31.792114 | orchestrator | Tuesday 23 September 2025 07:50:50 +0000 (0:00:02.548) 0:01:30.641 *****
2025-09-23 07:52:31.792126 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.792137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.792155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-23 07:52:31.792170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2025-09-23 07:52:31.792301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-23 07:52:31.792428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-23 07:52:31.792439 | orchestrator |
2025-09-23 07:52:31.792450 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-23 07:52:31.792461 | orchestrator | Tuesday 23 September 2025 07:50:55 +0000 (0:00:04.734) 0:01:35.376 *****
2025-09-23 07:52:31.792471 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:31.792482 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:31.792493 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:31.792503 | orchestrator |
2025-09-23 07:52:31.792514 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-23 07:52:31.792524 | orchestrator | Tuesday 23 September 2025 07:50:56 +0000 (0:00:00.570) 0:01:35.947 *****
2025-09-23 07:52:31.792535 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-23 07:52:31.792546 | orchestrator |
2025-09-23 07:52:31.792557 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-23 07:52:31.792567 | orchestrator | Tuesday 23 September 2025 07:50:58 +0000 (0:00:02.185) 0:01:38.132 *****
2025-09-23 07:52:31.792578 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-23 07:52:31.792588 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-23 07:52:31.792599 | orchestrator |
2025-09-23 07:52:31.792610 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-23 07:52:31.792620 | orchestrator | Tuesday 23 September 2025 07:51:01 +0000 (0:00:02.912) 0:01:41.045 *****
2025-09-23 07:52:31.792631 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:31.792641 | orchestrator |
2025-09-23 07:52:31.792652 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-23 07:52:31.792662 | orchestrator | Tuesday 23 September 2025 07:51:16 +0000 (0:00:14.826) 0:01:55.871 *****
2025-09-23 07:52:31.792679 | orchestrator |
2025-09-23 07:52:31.792690 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-23 07:52:31.792700 | orchestrator | Tuesday 23 September 2025 07:51:16 +0000 (0:00:00.567) 0:01:56.438 *****
2025-09-23 07:52:31.792711 | orchestrator |
2025-09-23 07:52:31.792721 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-23 07:52:31.792732 | orchestrator | Tuesday 23 September 2025 07:51:16 +0000 (0:00:00.098) 0:01:56.537 *****
2025-09-23 07:52:31.792743 | orchestrator |
2025-09-23 07:52:31.792753 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-23 07:52:31.792764 | orchestrator | Tuesday 23 September 2025 07:51:16 +0000 (0:00:00.133) 0:01:56.671 *****
2025-09-23 07:52:31.792774 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:31.792785 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:31.792796 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:31.792806 | orchestrator |
2025-09-23 07:52:31.792817 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-23 07:52:31.792828 | orchestrator | Tuesday 23 September 2025 07:51:33 +0000 (0:00:16.196) 0:02:12.867 *****
2025-09-23 07:52:31.792838 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:31.792849 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:31.792859 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:31.792876 | orchestrator |
2025-09-23 07:52:31.792894 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-23 07:52:31.792912 | orchestrator | Tuesday 23 September 2025 07:51:45 +0000 (0:00:12.903) 0:02:25.771 *****
2025-09-23 07:52:31.792930 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:31.792950 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:31.792969 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:31.793006 | orchestrator |
2025-09-23 07:52:31.793018 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-23 07:52:31.793029 | orchestrator | Tuesday 23 September 2025 07:51:53 +0000 (0:00:07.159) 0:02:32.930 *****
2025-09-23 07:52:31.793040 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:31.793051 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:31.793062 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:31.793073 | orchestrator |
2025-09-23 07:52:31.793084 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-23 07:52:31.793095 | orchestrator | Tuesday 23 September 2025 07:51:59 +0000 (0:00:06.062) 0:02:38.993 *****
2025-09-23 07:52:31.793105 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:31.793117 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:31.793128 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:31.793138 | orchestrator |
2025-09-23 07:52:31.793149 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-23 07:52:31.793161 | orchestrator | Tuesday 23 September 2025 07:52:12 +0000 (0:00:13.326) 0:02:52.320 *****
2025-09-23 07:52:31.793171 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:31.793182 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:31.793193 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:31.793203 | orchestrator |
2025-09-23 07:52:31.793214 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-23 07:52:31.793225 | orchestrator | Tuesday 23 September 2025 07:52:21 +0000 (0:00:09.149) 0:03:01.470 *****
2025-09-23 07:52:31.793236 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:31.793246 | orchestrator |
2025-09-23 07:52:31.793257 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:52:31.793268 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-23 07:52:31.793279 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:52:31.793296 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:52:31.793314 | orchestrator |
2025-09-23 07:52:31.793325 | orchestrator |
2025-09-23 07:52:31.793343 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:52:31.793354 | orchestrator | Tuesday 23 September 2025 07:52:28 +0000 (0:00:06.881) 0:03:08.351 *****
2025-09-23 07:52:31.793365 | orchestrator | ===============================================================================
2025-09-23 07:52:31.793376 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.49s
2025-09-23 07:52:31.793387 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.20s
2025-09-23 07:52:31.793398 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.83s
2025-09-23 07:52:31.793408 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.33s
2025-09-23 07:52:31.793419 | orchestrator | designate : Restart designate-api container ---------------------------- 12.90s
2025-09-23 07:52:31.793429 | orchestrator | designate : Restart designate-worker container -------------------------- 9.15s
2025-09-23 07:52:31.793440 | orchestrator | designate : Restart designate-central container ------------------------- 7.16s
2025-09-23 07:52:31.793451 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.88s
2025-09-23 07:52:31.793461 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.73s
2025-09-23 07:52:31.793472 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.20s
2025-09-23 07:52:31.793482 | orchestrator | designate : Copying over config.json files for services ----------------- 6.08s
2025-09-23 07:52:31.793493 | orchestrator | designate : Restart designate-producer container ------------------------ 6.06s
2025-09-23 07:52:31.793503 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.73s
2025-09-23 07:52:31.793514 | orchestrator | designate : Check designate containers ---------------------------------- 4.73s
2025-09-23 07:52:31.793524 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.60s
2025-09-23 07:52:31.793535 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.38s
2025-09-23 07:52:31.793545 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.37s
2025-09-23 07:52:31.793556 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.96s
2025-09-23 07:52:31.793566 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.64s
2025-09-23 07:52:31.793577 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.57s
2025-09-23 07:52:31.793587 | orchestrator | 2025-09-23 07:52:31 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:31.793598 | orchestrator | 2025-09-23 07:52:31 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:31.793609 | orchestrator | 2025-09-23 07:52:31 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:52:34.832580 | orchestrator | 2025-09-23 07:52:34 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:52:34.835224 | orchestrator | 2025-09-23 07:52:34 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED
2025-09-23 07:52:34.837324 | orchestrator | 2025-09-23 07:52:34 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:34.839116 | orchestrator | 2025-09-23 07:52:34 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:34.839148 | orchestrator | 2025-09-23 07:52:34 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:52:37.878754 | orchestrator | 2025-09-23 07:52:37 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:52:37.880399 | orchestrator | 2025-09-23 07:52:37 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED
2025-09-23 07:52:37.884831 | orchestrator | 2025-09-23 07:52:37 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:37.887778 | orchestrator | 2025-09-23 07:52:37 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:37.887932 | orchestrator | 2025-09-23 07:52:37 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:52:40.939766 | orchestrator | 2025-09-23 07:52:40 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:52:40.939860 | orchestrator | 2025-09-23 07:52:40 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state STARTED
2025-09-23 07:52:40.941934 | orchestrator | 2025-09-23 07:52:40 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:40.942577 | orchestrator | 2025-09-23 07:52:40 | INFO  | Task
58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED 2025-09-23 07:52:40.942730 | orchestrator | 2025-09-23 07:52:40 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:52:43.978089 | orchestrator | 2025-09-23 07:52:43 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:52:43.980543 | orchestrator | 2025-09-23 07:52:43.980584 | orchestrator | 2025-09-23 07:52:43 | INFO  | Task a253f435-8f0b-4007-b35e-3ae20ef7d82b is in state SUCCESS 2025-09-23 07:52:43.982432 | orchestrator | 2025-09-23 07:52:43.982467 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:52:43.982475 | orchestrator | 2025-09-23 07:52:43.982481 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:52:43.982487 | orchestrator | Tuesday 23 September 2025 07:49:11 +0000 (0:00:00.357) 0:00:00.357 ***** 2025-09-23 07:52:43.982493 | orchestrator | ok: [testbed-manager] 2025-09-23 07:52:43.982499 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:52:43.982505 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:52:43.982510 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:52:43.982516 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:52:43.982521 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:52:43.982527 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:52:43.982533 | orchestrator | 2025-09-23 07:52:43.982539 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:52:43.982545 | orchestrator | Tuesday 23 September 2025 07:49:12 +0000 (0:00:01.039) 0:00:01.396 ***** 2025-09-23 07:52:43.982552 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-23 07:52:43.982558 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-23 07:52:43.982565 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-23 
07:52:43.982573 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-23 07:52:43.982582 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-23 07:52:43.982590 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-23 07:52:43.982600 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-23 07:52:43.982608 | orchestrator | 2025-09-23 07:52:43.982617 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-23 07:52:43.982623 | orchestrator | 2025-09-23 07:52:43.982630 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-23 07:52:43.982636 | orchestrator | Tuesday 23 September 2025 07:49:13 +0000 (0:00:01.098) 0:00:02.495 ***** 2025-09-23 07:52:43.982642 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:52:43.982650 | orchestrator | 2025-09-23 07:52:43.982657 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-23 07:52:43.982663 | orchestrator | Tuesday 23 September 2025 07:49:15 +0000 (0:00:02.382) 0:00:04.877 ***** 2025-09-23 07:52:43.982690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.982701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.982711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.982720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.982747 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-23 07:52:43.982758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.982766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.982775 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.982789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.982797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.982805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.982814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.982832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.982841 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.982850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.982863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.982873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.982882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.982895 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-23 07:52:43.982922 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.982931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.982940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.983006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.983016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.983023 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.983030 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.983041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.983055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.983065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.983076 | orchestrator | 2025-09-23 07:52:43.983084 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-23 07:52:43.983091 | orchestrator | Tuesday 23 September 2025 07:49:20 +0000 (0:00:04.362) 0:00:09.239 ***** 2025-09-23 07:52:43.983097 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:52:43.983107 | orchestrator | 2025-09-23 07:52:43.983115 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-23 07:52:43.983124 | orchestrator | Tuesday 23 September 2025 07:49:21 +0000 (0:00:01.694) 0:00:10.934 ***** 2025-09-23 07:52:43.983133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.983143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.983152 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-23 07:52:43.983162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.983178 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.983188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.983203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.983212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.983222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-09-23 07:52:43.983231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.983241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.983251 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.983267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983320 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-23 07:52:43.983330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983399 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983453 | orchestrator |
2025-09-23 07:52:43.983461 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-09-23 07:52:43.983469 | orchestrator | Tuesday 23 September 2025 07:49:27 +0000 (0:00:06.135) 0:00:17.069 *****
2025-09-23 07:52:43.983478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983520 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.983529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983556 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-23 07:52:43.983566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983585 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983614 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-23 07:52:43.983636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983645 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983663 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:52:43.983672 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.983680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983729 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.983743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983769 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.983777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983807 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.983815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983848 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.983857 | orchestrator |
2025-09-23 07:52:43.983865 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-09-23 07:52:43.983874 | orchestrator | Tuesday 23 September 2025 07:49:29 +0000 (0:00:01.506) 0:00:18.576 *****
2025-09-23 07:52:43.983883 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-23 07:52:43.983893 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983902 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.983911 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-23 07:52:43.983927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983936 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:52:43.983949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.983959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.983991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984014 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.984023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984111 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.984120 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.984136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-23 07:52:43.984146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-23 07:52:43.984155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-23 07:52:43.984164 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:52:43.984173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-23 07:52:43.984183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-23 07:52:43.984196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-23 07:52:43.984205 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:52:43.984214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-23 07:52:43.984223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-23 07:52:43.984239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-23 07:52:43.984247 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:52:43.984255 | orchestrator | 2025-09-23 07:52:43.984264 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-23 07:52:43.984272 | orchestrator | Tuesday 23 September 2025 07:49:31 +0000 (0:00:02.255) 0:00:20.832 ***** 2025-09-23 07:52:43.984280 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-23 07:52:43.984289 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984345 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-23 07:52:43.984362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984409 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984475 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-23 07:52:43.984486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984534 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.984552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.984579 | orchestrator |
2025-09-23 07:52:43.984588 | orchestrator | TASK [prometheus : Find custom 
prometheus alert rules files] *******************
2025-09-23 07:52:43.984599 | orchestrator | Tuesday 23 September 2025 07:49:37 +0000 (0:00:06.114) 0:00:26.946 *****
2025-09-23 07:52:43.984608 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-23 07:52:43.984617 | orchestrator |
2025-09-23 07:52:43.984626 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-23 07:52:43.984639 | orchestrator | Tuesday 23 September 2025 07:49:38 +0000 (0:00:01.091) 0:00:28.037 *****
2025-09-23 07:52:43.984649 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320906, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2731905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984663 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320906, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2731905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984673 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320941, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2782047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984682 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320941, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2782047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984691 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320906, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2731905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984700 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320906, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2731905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984717 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320906, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2731905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984727 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320906, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2731905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984741 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1320906, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2731905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984750 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320893, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2715154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984759 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320893, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2715154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984768 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320941, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2782047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984778 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320941, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2782047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984805 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320941, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2782047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984820 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320930, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.275908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984829 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320893, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2715154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984838 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320941, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2782047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984847 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320893, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2715154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.984856 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320889, 'dev': 110, 'nlink': 1, 'atime': 
1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2686462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984866 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320893, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2715154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984882 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320930, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.275908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984895 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320930, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.275908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984904 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320911, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.273462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984912 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1320893, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2715154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984921 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320930, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.275908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984930 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320889, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2686462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984939 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320927, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2756329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984951 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320930, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.275908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984968 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320889, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2686462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.984996 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1320941, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2782047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-23 07:52:43.985005 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1320930, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.275908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985014 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320889, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 
1758585732.0, 'ctime': 1758611225.2686462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985023 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320915, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.274589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985032 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320889, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2686462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985045 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320911, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.273462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985065 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1320889, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2686462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985072 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320902, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.271989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985078 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320911, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.273462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985084 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320911, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.273462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985091 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320911, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.273462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985098 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320939, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2778914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985115 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320927, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2756329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985129 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320882, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2677252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985138 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320927, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2756329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985147 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1320911, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 
1758611225.273462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985155 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320927, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2756329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985162 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320927, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2756329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985168 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320915, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.274589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985180 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320915, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.274589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985192 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320915, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.274589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985201 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320915, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.274589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985211 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1320955, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2803202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985220 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320902, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.271989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985229 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320902, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.271989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985238 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320902, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.271989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985251 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320939, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2778914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985268 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1320927, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2756329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985278 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1320902, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 
1758611225.271989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985287 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320882, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2677252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985296 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1320915, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.274589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985305 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320939, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2778914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985314 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320939, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2778914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985326 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1320939, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2778914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985344 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1320937, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2766464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-23 07:52:43.985353 | orchestrator | changed: [testbed-manager] => 
(item=/operations/prometheus/ceph.rules)
2025-09-23 07:52:43.985362 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-23 07:52:43.985370 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-09-23 07:52:43.985379 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-23 07:52:43.985392 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-09-23 07:52:43.985401 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-23 07:52:43.985417 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-23 07:52:43.985426 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2025-09-23 07:52:43.985435 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-23 07:52:43.985443 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-23 07:52:43.985452 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-23 07:52:43.985464 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-23 07:52:43.985472 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2025-09-23 07:52:43.985488 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-09-23 07:52:43.985497 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-09-23 07:52:43.985507 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-23 07:52:43.985516 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-23 07:52:43.985526 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2025-09-23 07:52:43.985540 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules)
2025-09-23 07:52:43.985549 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-09-23 07:52:43.985565 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2025-09-23 07:52:43.985575 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-23 07:52:43.985584 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2025-09-23 07:52:43.985593 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-23 07:52:43.985607 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-23 07:52:43.985616 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2025-09-23 07:52:43.985625 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-23 07:52:43.985641 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-23 07:52:43.985651 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.985660 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-23 07:52:43.985669 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.985679 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2025-09-23 07:52:43.985688 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-23 07:52:43.985702 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-09-23 07:52:43.985711 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-09-23 07:52:43.985721 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2025-09-23 07:52:43.985739 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-09-23 07:52:43.985749 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-23 07:52:43.985758 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2025-09-23 07:52:43.985767 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-23 07:52:43.985780 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2025-09-23 07:52:43.985789 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-23 07:52:43.985799 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.985808 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2025-09-23 07:52:43.985825 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2025-09-23 07:52:43.985833 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2025-09-23 07:52:43.985842 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2025-09-23 07:52:43.985854 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-23 07:52:43.985863 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.985872 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-09-23 07:52:43.985880 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2025-09-23 07:52:43.985889 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-23 07:52:43.985898 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.985913 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2025-09-23 07:52:43.985922 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-23 07:52:43.985930 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.985938 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2025-09-23 07:52:43.985955 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-09-23 07:52:43.985963 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-23 07:52:43.986003 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-23 07:52:43.986047 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-23 07:52:43.986066 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules)
2025-09-23 07:52:43.986073 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-23 07:52:43.986085 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-23 07:52:43.986094 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rules)
2025-09-23 07:52:43.986103 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rec.rules)
2025-09-23 07:52:43.986112 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/mysql.rules)
2025-09-23 07:52:43.986122 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1320953, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2799485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-23 07:52:43.986131 | orchestrator |
2025-09-23 07:52:43.986140 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-23 07:52:43.986152 | orchestrator | Tuesday 23 September 2025 07:50:06 +0000 (0:00:27.304) 0:00:55.342 *****
2025-09-23 07:52:43.986161 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-23 07:52:43.986171 | orchestrator |
2025-09-23 07:52:43.986514 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-23 07:52:43.986582 | orchestrator | Tuesday 23 September 2025 07:50:06 +0000 (0:00:00.767) 0:00:56.110 *****
2025-09-23 07:52:43.986597 | orchestrator | [WARNING]: Skipped
2025-09-23 07:52:43.986610 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.986621 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-09-23 07:52:43.986632 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.986664 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-09-23 07:52:43.986685 | orchestrator | [WARNING]: Skipped
2025-09-23 07:52:43.986703 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.986721 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-09-23 07:52:43.986738 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.986755 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-09-23 07:52:43.986773 | orchestrator | [WARNING]: Skipped
2025-09-23 07:52:43.986790 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.986824 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-09-23 07:52:43.986844 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.986865 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-09-23 07:52:43.986886 | orchestrator | [WARNING]: Skipped
2025-09-23 07:52:43.986906 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.986928 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-09-23 07:52:43.986949 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.986967 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-09-23 07:52:43.987005 | orchestrator | [WARNING]: Skipped
2025-09-23 07:52:43.987017 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.987027 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-09-23 07:52:43.987038 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.987049 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-09-23 07:52:43.987060 | orchestrator | [WARNING]: Skipped
2025-09-23 07:52:43.987070 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.987081 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-09-23 07:52:43.987093 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.987105 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-09-23 07:52:43.987118 | orchestrator | [WARNING]: Skipped
2025-09-23 07:52:43.987130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.987142 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-09-23 07:52:43.987154 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-23 07:52:43.987167 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-09-23 07:52:43.987180 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 07:52:43.987192 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-23 07:52:43.987203 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-23 07:52:43.987214 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-23 07:52:43.987224 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-23 07:52:43.987235 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-23 07:52:43.987246 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-23 07:52:43.987256 | orchestrator |
2025-09-23 07:52:43.987268 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-23 07:52:43.987278 | orchestrator | Tuesday 23 September 2025 07:50:08 +0000 (0:00:01.831) 0:00:57.942 *****
2025-09-23 07:52:43.987289 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-23 07:52:43.987300 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.987311 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-23 07:52:43.987322 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.987333 | orchestrator | skipping: [testbed-node-2] =>
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-23 07:52:43.987355 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.987366 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-23 07:52:43.987377 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.987387 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-23 07:52:43.987398 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.987409 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-23 07:52:43.987420 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.987430 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-23 07:52:43.987441 | orchestrator |
2025-09-23 07:52:43.987452 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-23 07:52:43.987473 | orchestrator | Tuesday 23 September 2025 07:50:38 +0000 (0:00:30.021) 0:01:27.963 *****
2025-09-23 07:52:43.987484 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-23 07:52:43.987511 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.987523 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-23 07:52:43.987534 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.987545 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-23 07:52:43.987555 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.987566 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-23 07:52:43.987577 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.987588 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-23 07:52:43.987599 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.987610 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-23 07:52:43.987621 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.987632 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-23 07:52:43.987643 | orchestrator |
2025-09-23 07:52:43.987653 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-23 07:52:43.987664 | orchestrator | Tuesday 23 September 2025 07:50:41 +0000 (0:00:03.210) 0:01:31.174 *****
2025-09-23 07:52:43.987675 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-23 07:52:43.987687 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-23 07:52:43.987698 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-23 07:52:43.987709 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-23 07:52:43.987720 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.987731 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.987741 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.987753 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.987772 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-23 07:52:43.987791 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.987809 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-23 07:52:43.987838 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-23 07:52:43.987856 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.987874 | orchestrator |
2025-09-23 07:52:43.987892 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-23 07:52:43.987909 | orchestrator | Tuesday 23 September 2025 07:50:44 +0000 (0:00:02.211) 0:01:33.385 *****
2025-09-23 07:52:43.987927 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-23 07:52:43.987943 | orchestrator |
2025-09-23 07:52:43.987961 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-23 07:52:43.988004 | orchestrator | Tuesday 23 September 2025 07:50:45 +0000 (0:00:01.486) 0:01:34.872 *****
2025-09-23 07:52:43.988025 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:52:43.988042 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.988061 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.988081 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.988099 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.988113 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.988123 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.988134 | orchestrator |
2025-09-23 07:52:43.988145 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-23 07:52:43.988155 | orchestrator | Tuesday 23 September 2025 07:50:46 +0000
(0:00:00.779) 0:01:35.651 *****
2025-09-23 07:52:43.988166 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:52:43.988177 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.988187 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.988198 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.988208 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:43.988219 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:43.988230 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:43.988240 | orchestrator |
2025-09-23 07:52:43.988251 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-23 07:52:43.988262 | orchestrator | Tuesday 23 September 2025 07:50:49 +0000 (0:00:02.561) 0:01:38.213 *****
2025-09-23 07:52:43.988272 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-23 07:52:43.988283 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-23 07:52:43.988294 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.988304 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:52:43.988315 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-23 07:52:43.988326 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.988344 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-23 07:52:43.988356 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.988376 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-23 07:52:43.988388 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.988398 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-23 07:52:43.988409 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.988419 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-23 07:52:43.988430 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.988441 | orchestrator |
2025-09-23 07:52:43.988451 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-23 07:52:43.988462 | orchestrator | Tuesday 23 September 2025 07:50:51 +0000 (0:00:02.657) 0:01:40.870 *****
2025-09-23 07:52:43.988473 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-23 07:52:43.988484 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.988495 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-23 07:52:43.988515 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.988525 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-23 07:52:43.988536 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.988547 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-23 07:52:43.988557 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.988568 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-23 07:52:43.988579 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.988589 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-23 07:52:43.988600 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-23 07:52:43.988611 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.988621 | orchestrator |
2025-09-23 07:52:43.988632 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-23 07:52:43.988642 | orchestrator | Tuesday 23 September 2025 07:50:53 +0000 (0:00:02.207) 0:01:43.077 *****
2025-09-23 07:52:43.988653 | orchestrator | [WARNING]: Skipped
2025-09-23 07:52:43.988663 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-09-23 07:52:43.988674 | orchestrator | due to this access issue:
2025-09-23 07:52:43.988685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-09-23 07:52:43.988695 | orchestrator | not a directory
2025-09-23 07:52:43.988706 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-23 07:52:43.988716 | orchestrator |
2025-09-23 07:52:43.988727 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-23 07:52:43.988738 | orchestrator | Tuesday 23 September 2025 07:50:55 +0000 (0:00:01.037) 0:01:44.315 *****
2025-09-23 07:52:43.988749 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:52:43.988759 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:52:43.988770 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:52:43.988780 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:52:43.988791 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:52:43.988802 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:52:43.988812 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:52:43.988823 | orchestrator |
2025-09-23 07:52:43.988834 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-09-23 07:52:43.988845 | orchestrator | Tuesday 23 September 2025 07:50:56 +0000 (0:00:01.315) 0:01:45.353 *****
2025-09-23 07:52:43.988855 | orchestrator | skipping: [testbed-manager]
2025-09-23
07:52:43.988866 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:52:43.988876 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:52:43.988887 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:52:43.988897 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:52:43.988908 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:52:43.988918 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:52:43.988928 | orchestrator | 2025-09-23 07:52:43.988945 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-23 07:52:43.988964 | orchestrator | Tuesday 23 September 2025 07:50:57 +0000 (0:00:01.315) 0:01:46.668 ***** 2025-09-23 07:52:43.989042 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-23 07:52:43.989100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.989123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.989144 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.989164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.989179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.989190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.989202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.989222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.989247 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.989260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-23 07:52:43.989271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-23 07:52:43.989282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.989294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-23 07:52:43.989306 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-23 07:52:43.989328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.989350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.989363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.989374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.989386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.989397 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.989408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.989453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.989465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.989487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.989499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.989510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.989521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-23 07:52:43.989532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-23 07:52:43.989543 | orchestrator |
2025-09-23 07:52:43.989554 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-09-23 07:52:43.989565 | orchestrator | Tuesday 23 September 2025 07:51:03 +0000 (0:00:05.554) 0:01:52.223 *****
2025-09-23 07:52:43.989576 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-23 07:52:43.989599 | orchestrator | skipping: [testbed-manager]
2025-09-23 07:52:43.989609 | orchestrator |
2025-09-23 07:52:43.989620 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-23 07:52:43.989631 | orchestrator | Tuesday 23 September 2025 07:51:04 +0000 (0:00:01.916) 0:01:54.139 *****
2025-09-23 07:52:43.989641 | orchestrator |
2025-09-23 07:52:43.989652 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-23 07:52:43.989663 | orchestrator | Tuesday 23 September 2025 07:51:05 +0000 (0:00:00.077) 0:01:54.217 *****
2025-09-23 07:52:43.989673 | orchestrator |
2025-09-23 07:52:43.989684 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-23 07:52:43.989694 | orchestrator | Tuesday 23 September 2025 07:51:05 +0000 (0:00:00.050) 0:01:54.267 *****
2025-09-23 07:52:43.989706 | orchestrator |
2025-09-23 07:52:43.989716 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-23 07:52:43.989727 | orchestrator | Tuesday 23 September 2025 07:51:05 +0000 (0:00:00.051) 0:01:54.319 *****
2025-09-23 07:52:43.989738 | orchestrator |
2025-09-23 07:52:43.989748 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-23 07:52:43.989759 | orchestrator | Tuesday 23 September 2025 07:51:05 +0000 (0:00:00.161) 0:01:54.481 *****
2025-09-23 07:52:43.989770 | orchestrator |
2025-09-23 07:52:43.989780 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-23 07:52:43.989791 | orchestrator | Tuesday 23 September 2025 07:51:05 +0000 (0:00:00.055) 0:01:54.537 *****
2025-09-23 07:52:43.989801 | orchestrator |
2025-09-23 07:52:43.989812 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-23 07:52:43.989822 | orchestrator | Tuesday 23 September 2025 07:51:05 +0000 (0:00:00.104) 0:01:54.641 *****
2025-09-23 07:52:43.989833 | orchestrator |
2025-09-23 07:52:43.989843 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-09-23 07:52:43.989854 | orchestrator | Tuesday 23 September 2025 07:51:05 +0000 (0:00:00.160) 0:01:54.802 *****
2025-09-23 07:52:43.989869 | orchestrator | changed: [testbed-manager]
2025-09-23 07:52:43.989880 | orchestrator |
2025-09-23 07:52:43.989891 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-09-23 07:52:43.989907 | orchestrator | Tuesday 23 September 2025 07:51:18 +0000 (0:00:13.260) 0:02:08.063 *****
2025-09-23 07:52:43.989918 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:43.989928 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:43.989939 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:52:43.989950 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:52:43.989960 | orchestrator | changed: [testbed-manager]
2025-09-23 07:52:43.989971 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:43.990210 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:52:43.990232 | orchestrator |
2025-09-23 07:52:43.990251 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-09-23 07:52:43.990269 | orchestrator | Tuesday 23 September 2025 07:51:34 +0000 (0:00:15.710) 0:02:23.773 *****
2025-09-23 07:52:43.990280 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:43.990291 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:43.990301 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:43.990312 | orchestrator |
2025-09-23 07:52:43.990322 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-09-23 07:52:43.990333 | orchestrator | Tuesday 23 September 2025 07:51:46 +0000 (0:00:11.736) 0:02:35.509 *****
2025-09-23 07:52:43.990344 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:43.990354 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:43.990365 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:43.990375 | orchestrator |
2025-09-23 07:52:43.990386 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-09-23 07:52:43.990396 | orchestrator | Tuesday 23 September 2025 07:51:57 +0000 (0:00:11.160) 0:02:46.670 *****
2025-09-23 07:52:43.990407 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:52:43.990417 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:43.990440 | orchestrator | changed: [testbed-manager]
2025-09-23 07:52:43.990450 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:52:43.990460 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:52:43.990471 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:43.990481 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:43.990493 | orchestrator |
2025-09-23 07:52:43.990502 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-09-23 07:52:43.990512 | orchestrator | Tuesday 23 September 2025 07:52:12 +0000 (0:00:15.233) 0:03:01.903 *****
2025-09-23 07:52:43.990521 | orchestrator | changed: [testbed-manager]
2025-09-23 07:52:43.990531 | orchestrator |
2025-09-23 07:52:43.990540 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-09-23 07:52:43.990550 | orchestrator | Tuesday 23 September 2025 07:52:21 +0000 (0:00:08.784) 0:03:10.688 *****
2025-09-23 07:52:43.990559 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:52:43.990569 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:52:43.990578 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:52:43.990588 | orchestrator |
2025-09-23 07:52:43.990597 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-09-23 07:52:43.990607 | orchestrator | Tuesday 23 September 2025 07:52:26 +0000 (0:00:04.787) 0:03:15.475 *****
2025-09-23 07:52:43.990617 | orchestrator | changed: [testbed-manager]
2025-09-23 07:52:43.990626 | orchestrator |
2025-09-23 07:52:43.990636 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-09-23 07:52:43.990645 | orchestrator | Tuesday 23 September 2025 07:52:30 +0000 (0:00:04.504) 0:03:19.980 *****
2025-09-23 07:52:43.990655 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:52:43.990664 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:52:43.990674 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:52:43.990683 | orchestrator |
2025-09-23 07:52:43.990693 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:52:43.990702 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-23 07:52:43.990712 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-23 07:52:43.990722 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-23 07:52:43.990732 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-23 07:52:43.990741 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-23 07:52:43.990751 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-23 07:52:43.990760 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-23 07:52:43.990770 | orchestrator |
2025-09-23 07:52:43.990779 | orchestrator |
2025-09-23 07:52:43.990788 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:52:43.990798 | orchestrator | Tuesday 23 September 2025 07:52:40 +0000 (0:00:10.059) 0:03:30.039 *****
2025-09-23 07:52:43.990807 | orchestrator | ===============================================================================
2025-09-23 07:52:43.990817 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 30.02s
2025-09-23 07:52:43.990826 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.30s
2025-09-23 07:52:43.990835 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.71s
2025-09-23 07:52:43.990886 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.23s
2025-09-23 07:52:43.990898 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.26s
2025-09-23 07:52:43.990917 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.74s
2025-09-23 07:52:43.990927 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.16s
2025-09-23 07:52:43.990937 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.06s
2025-09-23 07:52:43.990946 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.78s
2025-09-23 07:52:43.990956 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.14s
2025-09-23 07:52:43.990965 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.11s
2025-09-23 07:52:43.990995 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.55s
2025-09-23 07:52:43.991005 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.79s
2025-09-23 07:52:43.991015 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.50s
2025-09-23 07:52:43.991024 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.36s
2025-09-23 07:52:43.991034 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.21s
2025-09-23 07:52:43.991043 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.66s
2025-09-23 07:52:43.991053 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.56s
2025-09-23 07:52:43.991062 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.36s
2025-09-23 07:52:43.991072 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.26s
2025-09-23 07:52:43.991081 | orchestrator | 2025-09-23 07:52:43 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:43.991091 | orchestrator | 2025-09-23 07:52:43 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:43.991100 | orchestrator | 2025-09-23 07:52:43 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:52:43.991110 | orchestrator | 2025-09-23 07:52:43 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:52:47.027516 | orchestrator | 2025-09-23 07:52:47 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:52:47.030442 | orchestrator | 2025-09-23 07:52:47 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:47.032288 | orchestrator | 2025-09-23 07:52:47 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:47.035031 | orchestrator | 2025-09-23 07:52:47 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:52:47.035120 | orchestrator | 2025-09-23 07:52:47 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:52:50.074254 | orchestrator | 2025-09-23 07:52:50 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:52:50.077729 | orchestrator | 2025-09-23 07:52:50 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:50.079370 | orchestrator | 2025-09-23 07:52:50 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:50.082243 | orchestrator | 2025-09-23 07:52:50 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:52:50.082289 | orchestrator | 2025-09-23 07:52:50 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:52:53.118465 | orchestrator | 2025-09-23 07:52:53 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:52:53.118771 | orchestrator | 2025-09-23 07:52:53 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:53.120096 | orchestrator | 2025-09-23 07:52:53 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:53.121171 | orchestrator | 2025-09-23 07:52:53 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:52:53.121194 | orchestrator | 2025-09-23 07:52:53 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:52:56.160621 | orchestrator | 2025-09-23 07:52:56 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:52:56.162160 | orchestrator | 2025-09-23 07:52:56 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:56.163893 | orchestrator | 2025-09-23 07:52:56 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:56.165491 | orchestrator | 2025-09-23 07:52:56 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:52:56.165653 | orchestrator | 2025-09-23 07:52:56 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:52:59.207459 | orchestrator | 2025-09-23 07:52:59 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:52:59.207546 | orchestrator | 2025-09-23 07:52:59 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:52:59.207560 | orchestrator | 2025-09-23 07:52:59 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:52:59.207734 | orchestrator | 2025-09-23 07:52:59 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:52:59.207948 | orchestrator | 2025-09-23 07:52:59 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:02.248829 | orchestrator | 2025-09-23 07:53:02 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:02.250244 | orchestrator | 2025-09-23 07:53:02 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:02.251390 | orchestrator | 2025-09-23 07:53:02 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:02.252677 | orchestrator | 2025-09-23 07:53:02 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:02.252803 | orchestrator | 2025-09-23 07:53:02 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:05.296194 | orchestrator | 2025-09-23 07:53:05 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:05.299711 | orchestrator | 2025-09-23 07:53:05 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:05.302807 | orchestrator | 2025-09-23 07:53:05 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:05.304198 | orchestrator | 2025-09-23 07:53:05 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:05.304835 | orchestrator | 2025-09-23 07:53:05 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:08.348555 | orchestrator | 2025-09-23 07:53:08 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:08.350055 | orchestrator | 2025-09-23 07:53:08 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:08.351992 | orchestrator | 2025-09-23 07:53:08 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:08.354241 | orchestrator | 2025-09-23 07:53:08 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:08.354633 | orchestrator | 2025-09-23 07:53:08 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:11.394242 | orchestrator | 2025-09-23 07:53:11 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:11.395179 | orchestrator | 2025-09-23 07:53:11 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:11.396136 | orchestrator | 2025-09-23 07:53:11 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:11.398367 | orchestrator | 2025-09-23 07:53:11 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:11.398404 | orchestrator | 2025-09-23 07:53:11 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:14.446496 | orchestrator | 2025-09-23 07:53:14 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:14.448394 | orchestrator | 2025-09-23 07:53:14 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:14.449769 | orchestrator | 2025-09-23 07:53:14 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:14.451661 | orchestrator | 2025-09-23 07:53:14 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:14.451877 | orchestrator | 2025-09-23 07:53:14 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:17.499394 | orchestrator | 2025-09-23 07:53:17 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:17.501457 | orchestrator | 2025-09-23 07:53:17 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:17.503312 | orchestrator | 2025-09-23 07:53:17 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:17.505903 | orchestrator | 2025-09-23 07:53:17 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:17.506634 | orchestrator | 2025-09-23 07:53:17 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:20.567417 | orchestrator | 2025-09-23 07:53:20 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:20.571997 | orchestrator | 2025-09-23 07:53:20 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:20.576017 | orchestrator | 2025-09-23 07:53:20 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:20.581246 | orchestrator | 2025-09-23 07:53:20 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:20.581310 | orchestrator | 2025-09-23 07:53:20 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:23.618495 | orchestrator | 2025-09-23 07:53:23 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:23.619450 | orchestrator | 2025-09-23 07:53:23 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:23.620753 | orchestrator | 2025-09-23 07:53:23 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:23.621821 | orchestrator | 2025-09-23 07:53:23 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:23.621845 | orchestrator | 2025-09-23 07:53:23 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:26.659098 | orchestrator | 2025-09-23 07:53:26 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:26.661081 | orchestrator | 2025-09-23 07:53:26 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:26.664198 | orchestrator | 2025-09-23 07:53:26 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:26.666392 | orchestrator | 2025-09-23 07:53:26 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:26.667112 | orchestrator | 2025-09-23 07:53:26 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:29.714124 | orchestrator | 2025-09-23 07:53:29 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:29.714752 | orchestrator | 2025-09-23 07:53:29 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:29.716338 | orchestrator | 2025-09-23 07:53:29 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:29.717781 | orchestrator | 2025-09-23 07:53:29 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:29.717985 | orchestrator | 2025-09-23 07:53:29 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:32.756786 | orchestrator | 2025-09-23 07:53:32 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:32.757034 | orchestrator | 2025-09-23 07:53:32 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:32.758687 | orchestrator | 2025-09-23 07:53:32 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:32.759287 | orchestrator | 2025-09-23 07:53:32 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:32.759394 | orchestrator | 2025-09-23 07:53:32 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:35.804712 | orchestrator | 2025-09-23 07:53:35 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:35.805763 | orchestrator | 2025-09-23 07:53:35 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:35.807653 | orchestrator | 2025-09-23 07:53:35 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state STARTED
2025-09-23 07:53:35.811069 | orchestrator | 2025-09-23 07:53:35 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:35.811493 | orchestrator | 2025-09-23 07:53:35 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:38.848019 | orchestrator | 2025-09-23 07:53:38 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:38.849187 | orchestrator | 2025-09-23 07:53:38 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:38.850145 | orchestrator | 2025-09-23 07:53:38 | INFO  | Task 58cacd57-d645-4534-a0f2-51d8f8cf4f83 is in state SUCCESS
2025-09-23 07:53:38.851817 | orchestrator |
2025-09-23 07:53:38.851851 | orchestrator |
2025-09-23 07:53:38.851863 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:53:38.851875 | orchestrator |
2025-09-23 07:53:38.851887 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:53:38.851898 | orchestrator | Tuesday 23 September 2025 07:52:32 +0000 (0:00:00.245) 0:00:00.245 *****
2025-09-23 07:53:38.851910 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:53:38.851945 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:53:38.851956 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:53:38.851967 | orchestrator |
2025-09-23 07:53:38.851996 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:53:38.852007 | orchestrator | Tuesday 23 September 2025 07:52:33 +0000 (0:00:00.281) 0:00:00.527 *****
2025-09-23 07:53:38.852019 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-23 07:53:38.852030 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-23 07:53:38.852041 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-23 07:53:38.852052 | orchestrator |
2025-09-23 07:53:38.852063 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-23 07:53:38.852096 | orchestrator |
2025-09-23 07:53:38.852108 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-23 07:53:38.852118 | orchestrator | Tuesday 23 September 2025 07:52:33 +0000 (0:00:00.383) 0:00:00.910 *****
2025-09-23 07:53:38.852134 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:53:38.852147 | orchestrator |
2025-09-23 07:53:38.852158 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-23 07:53:38.852168 | orchestrator | Tuesday 23 September 2025 07:52:33 +0000 (0:00:00.494) 0:00:01.404 *****
2025-09-23 07:53:38.852180 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-23 07:53:38.852191 | orchestrator |
2025-09-23 07:53:38.852201 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-23 07:53:38.852212 | orchestrator | Tuesday 23 September 2025 07:52:37 +0000 (0:00:03.554) 0:00:04.959 *****
2025-09-23 07:53:38.852223 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-23 07:53:38.852234 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-23 07:53:38.852268 | orchestrator |
2025-09-23 07:53:38.852280 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-23 07:53:38.852291 | orchestrator | Tuesday 23 September 2025 07:52:43 +0000 (0:00:06.448) 0:00:11.407 *****
2025-09-23 07:53:38.852302 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-23 07:53:38.852313 | orchestrator |
2025-09-23 07:53:38.852324 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-23 07:53:38.852335 | orchestrator | Tuesday 23 September 2025 07:52:47 +0000 (0:00:03.466) 0:00:14.874 *****
2025-09-23 07:53:38.852345 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-23 07:53:38.852356 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-23 07:53:38.852367 | orchestrator |
2025-09-23 07:53:38.852378 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-23 07:53:38.852388 | orchestrator | Tuesday 23 September 2025 07:52:51 +0000 (0:00:03.839) 0:00:18.713 *****
2025-09-23 07:53:38.852399 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-23 07:53:38.852412 | orchestrator |
2025-09-23 07:53:38.852424 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-23 07:53:38.852437 | orchestrator | Tuesday 23 September 2025 07:52:54 +0000 (0:00:03.320) 0:00:22.033 *****
2025-09-23 07:53:38.852449 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-23 07:53:38.852461 | orchestrator |
2025-09-23 07:53:38.852473 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-23 07:53:38.852485 | orchestrator | Tuesday 23 September 2025 07:52:58 +0000 (0:00:04.150) 0:00:26.184 *****
2025-09-23 07:53:38.852498 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:53:38.852509 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:53:38.852522 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:53:38.852533 | orchestrator |
2025-09-23 07:53:38.852545 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-23 07:53:38.852557 | orchestrator | Tuesday 23 September 2025 07:52:59 +0000 (0:00:00.255) 0:00:26.440 *****
2025-09-23 07:53:38.852574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-23 07:53:38.852618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-23 07:53:38.852633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-23 07:53:38.852645 | orchestrator |
2025-09-23 07:53:38.852657 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-09-23 07:53:38.852669 | orchestrator | Tuesday 23 September 2025 07:52:59 +0000 (0:00:00.740) 0:00:27.180 *****
2025-09-23 07:53:38.852682 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:53:38.852694 | orchestrator |
2025-09-23 07:53:38.852707 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-09-23 07:53:38.852719 | orchestrator | Tuesday 23 September 2025 07:52:59 +0000 (0:00:00.124) 0:00:27.305 *****
2025-09-23 07:53:38.852732 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:53:38.852744 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:53:38.852756 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:53:38.852767 | orchestrator |
2025-09-23 07:53:38.852778 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-23 07:53:38.852789 | orchestrator | Tuesday 23 September 2025 07:53:00 +0000 (0:00:00.387) 0:00:27.693 *****
2025-09-23 07:53:38.852799 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:53:38.852810 | orchestrator |
2025-09-23 07:53:38.852821 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-09-23 07:53:38.852831 | orchestrator | Tuesday 23 September 2025 07:53:00 +0000 (0:00:00.482) 0:00:28.175 *****
2025-09-23 07:53:38.852843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-23 07:53:38.852874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout':
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.852891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.852903 | orchestrator | 2025-09-23 07:53:38.852933 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-23 07:53:38.852945 | orchestrator | Tuesday 23 September 2025 07:53:02 +0000 (0:00:01.368) 0:00:29.544 ***** 2025-09-23 07:53:38.852956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.852968 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:53:38.852979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.852997 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:53:38.853014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.853026 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:53:38.853037 | orchestrator | 2025-09-23 07:53:38.853048 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-23 07:53:38.853058 | orchestrator | Tuesday 23 September 2025 07:53:02 +0000 (0:00:00.773) 0:00:30.317 ***** 2025-09-23 07:53:38.853074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.853086 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:53:38.853097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.853108 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:53:38.853119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.853137 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:53:38.853148 | orchestrator | 2025-09-23 07:53:38.853159 | 
orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-23 07:53:38.853169 | orchestrator | Tuesday 23 September 2025 07:53:03 +0000 (0:00:00.709) 0:00:31.027 ***** 2025-09-23 07:53:38.853188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853228 | orchestrator | 2025-09-23 07:53:38.853239 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-23 07:53:38.853249 | orchestrator | Tuesday 23 September 2025 07:53:05 +0000 (0:00:01.397) 0:00:32.424 ***** 2025-09-23 07:53:38.853260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853324 | orchestrator | 2025-09-23 07:53:38.853334 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-23 07:53:38.853345 | orchestrator | Tuesday 23 September 2025 07:53:07 +0000 (0:00:02.473) 0:00:34.898 ***** 2025-09-23 07:53:38.853356 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-23 07:53:38.853367 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-23 07:53:38.853378 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-23 07:53:38.853388 | orchestrator | 2025-09-23 07:53:38.853399 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-23 07:53:38.853410 | orchestrator | Tuesday 23 September 2025 07:53:09 +0000 (0:00:01.858) 0:00:36.757 ***** 2025-09-23 07:53:38.853421 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:53:38.853431 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:53:38.853442 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:53:38.853452 | orchestrator | 2025-09-23 07:53:38.853463 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-23 07:53:38.853475 | orchestrator | Tuesday 23 September 2025 07:53:10 +0000 (0:00:01.598) 0:00:38.355 ***** 2025-09-23 07:53:38.853493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.853522 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:53:38.853540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.853559 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:53:38.853587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-23 07:53:38.853603 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:53:38.853614 | orchestrator | 2025-09-23 07:53:38.853625 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-23 07:53:38.853641 | orchestrator | Tuesday 23 September 2025 07:53:11 +0000 (0:00:00.698) 0:00:39.054 ***** 2025-09-23 07:53:38.853652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853664 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-23 07:53:38.853694 | orchestrator | 2025-09-23 07:53:38.853705 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-23 07:53:38.853715 | orchestrator | Tuesday 
23 September 2025 07:53:12 +0000 (0:00:01.337) 0:00:40.392 *****
2025-09-23 07:53:38.853726 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:53:38.853737 | orchestrator |
2025-09-23 07:53:38.853747 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-09-23 07:53:38.853758 | orchestrator | Tuesday 23 September 2025 07:53:15 +0000 (0:00:02.440) 0:00:42.832 *****
2025-09-23 07:53:38.853768 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:53:38.853779 | orchestrator |
2025-09-23 07:53:38.853790 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-09-23 07:53:38.853801 | orchestrator | Tuesday 23 September 2025 07:53:17 +0000 (0:00:01.806) 0:00:44.639 *****
2025-09-23 07:53:38.853811 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:53:38.853822 | orchestrator |
2025-09-23 07:53:38.853832 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-23 07:53:38.853843 | orchestrator | Tuesday 23 September 2025 07:53:31 +0000 (0:00:14.357) 0:00:58.996 *****
2025-09-23 07:53:38.853854 | orchestrator |
2025-09-23 07:53:38.853864 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-23 07:53:38.853875 | orchestrator | Tuesday 23 September 2025 07:53:31 +0000 (0:00:00.072) 0:00:59.069 *****
2025-09-23 07:53:38.853886 | orchestrator |
2025-09-23 07:53:38.853902 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-23 07:53:38.853913 | orchestrator | Tuesday 23 September 2025 07:53:31 +0000 (0:00:00.070) 0:00:59.139 *****
2025-09-23 07:53:38.853952 | orchestrator |
2025-09-23 07:53:38.853962 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-09-23 07:53:38.853973 | orchestrator | Tuesday 23 September 2025 07:53:31 +0000 (0:00:00.084) 0:00:59.224 *****
2025-09-23 07:53:38.853984 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:53:38.853995 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:53:38.854011 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:53:38.854078 | orchestrator |
2025-09-23 07:53:38.854090 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:53:38.854102 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-23 07:53:38.854122 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-23 07:53:38.854133 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-23 07:53:38.854144 | orchestrator |
2025-09-23 07:53:38.854155 | orchestrator |
2025-09-23 07:53:38.854166 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:53:38.854176 | orchestrator | Tuesday 23 September 2025 07:53:37 +0000 (0:00:05.764) 0:01:04.988 *****
2025-09-23 07:53:38.854187 | orchestrator | ===============================================================================
2025-09-23 07:53:38.854198 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.36s
2025-09-23 07:53:38.854209 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.45s
2025-09-23 07:53:38.854219 | orchestrator | placement : Restart placement-api container ----------------------------- 5.76s
2025-09-23 07:53:38.854230 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.15s
2025-09-23 07:53:38.854241 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.84s
2025-09-23 07:53:38.854251 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.55s
2025-09-23 07:53:38.854262 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.47s
2025-09-23 07:53:38.854273 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.32s
2025-09-23 07:53:38.854283 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.47s
2025-09-23 07:53:38.854294 | orchestrator | placement : Creating placement databases -------------------------------- 2.44s
2025-09-23 07:53:38.854305 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.86s
2025-09-23 07:53:38.854316 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.81s
2025-09-23 07:53:38.854326 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.60s
2025-09-23 07:53:38.854337 | orchestrator | placement : Copying over config.json files for services ----------------- 1.40s
2025-09-23 07:53:38.854347 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.37s
2025-09-23 07:53:38.854358 | orchestrator | placement : Check placement containers ---------------------------------- 1.34s
2025-09-23 07:53:38.854369 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.77s
2025-09-23 07:53:38.854379 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.74s
2025-09-23 07:53:38.854390 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s
2025-09-23 07:53:38.854400 | orchestrator | placement : Copying over existing policy file --------------------------- 0.70s
2025-09-23 07:53:38.854411 | orchestrator | 2025-09-23 07:53:38 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:38.854422 | orchestrator | 2025-09-23 07:53:38 | INFO  | Wait 1 second(s) until the next check
2025-09-23
07:53:41.896160 | orchestrator | 2025-09-23 07:53:41 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:41.897287 | orchestrator | 2025-09-23 07:53:41 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED
2025-09-23 07:53:41.897993 | orchestrator | 2025-09-23 07:53:41 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:41.898733 | orchestrator | 2025-09-23 07:53:41 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:41.898829 | orchestrator | 2025-09-23 07:53:41 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:44.934405 | orchestrator | 2025-09-23 07:53:44 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:44.935610 | orchestrator | 2025-09-23 07:53:44 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED
2025-09-23 07:53:44.936409 | orchestrator | 2025-09-23 07:53:44 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:44.937335 | orchestrator | 2025-09-23 07:53:44 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:44.937376 | orchestrator | 2025-09-23 07:53:44 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:47.988292 | orchestrator | 2025-09-23 07:53:47 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:47.990185 | orchestrator | 2025-09-23 07:53:47 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED
2025-09-23 07:53:47.991618 | orchestrator | 2025-09-23 07:53:47 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:47.992885 | orchestrator | 2025-09-23 07:53:47 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:47.993128 | orchestrator | 2025-09-23 07:53:47 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:51.039772 | orchestrator | 2025-09-23 07:53:51 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:51.043437 | orchestrator | 2025-09-23 07:53:51 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED
2025-09-23 07:53:51.047575 | orchestrator | 2025-09-23 07:53:51 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:51.051060 | orchestrator | 2025-09-23 07:53:51 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:51.052047 | orchestrator | 2025-09-23 07:53:51 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:54.091064 | orchestrator | 2025-09-23 07:53:54 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:54.094834 | orchestrator | 2025-09-23 07:53:54 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED
2025-09-23 07:53:54.097855 | orchestrator | 2025-09-23 07:53:54 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:54.101604 | orchestrator | 2025-09-23 07:53:54 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:54.101675 | orchestrator | 2025-09-23 07:53:54 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:53:57.153562 | orchestrator | 2025-09-23 07:53:57 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:53:57.154577 | orchestrator | 2025-09-23 07:53:57 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED
2025-09-23 07:53:57.158409 | orchestrator | 2025-09-23 07:53:57 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED
2025-09-23 07:53:57.159762 | orchestrator | 2025-09-23 07:53:57 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:53:57.159855 | orchestrator | 2025-09-23 07:53:57 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:00.220673 | orchestrator | 2025-09-23 07:54:00 | INFO  |
Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:54:00.221640 | orchestrator | 2025-09-23 07:54:00 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED 2025-09-23 07:54:00.222834 | orchestrator | 2025-09-23 07:54:00 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED 2025-09-23 07:54:00.223846 | orchestrator | 2025-09-23 07:54:00 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED 2025-09-23 07:54:00.223968 | orchestrator | 2025-09-23 07:54:00 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:54:03.268633 | orchestrator | 2025-09-23 07:54:03 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:54:03.268734 | orchestrator | 2025-09-23 07:54:03 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED 2025-09-23 07:54:03.270857 | orchestrator | 2025-09-23 07:54:03 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state STARTED 2025-09-23 07:54:03.272802 | orchestrator | 2025-09-23 07:54:03 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED 2025-09-23 07:54:03.272835 | orchestrator | 2025-09-23 07:54:03 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:54:06.329779 | orchestrator | 2025-09-23 07:54:06 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:54:06.331508 | orchestrator | 2025-09-23 07:54:06 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED 2025-09-23 07:54:06.335175 | orchestrator | 2025-09-23 07:54:06 | INFO  | Task 68d644df-f47e-478d-96ea-417fcd59b84c is in state SUCCESS 2025-09-23 07:54:06.335524 | orchestrator | 2025-09-23 07:54:06.338725 | orchestrator | 2025-09-23 07:54:06.338765 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:54:06.338777 | orchestrator | 2025-09-23 07:54:06.338789 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-09-23 07:54:06.338800 | orchestrator | Tuesday 23 September 2025 07:49:19 +0000 (0:00:00.310) 0:00:00.310 ***** 2025-09-23 07:54:06.338811 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:54:06.338823 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:54:06.338834 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:54:06.338848 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:54:06.338867 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:54:06.338924 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:54:06.338938 | orchestrator | 2025-09-23 07:54:06.338949 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:54:06.338980 | orchestrator | Tuesday 23 September 2025 07:49:20 +0000 (0:00:01.040) 0:00:01.351 ***** 2025-09-23 07:54:06.338992 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-23 07:54:06.339002 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-23 07:54:06.339013 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-23 07:54:06.339024 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-23 07:54:06.339034 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-23 07:54:06.339045 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-23 07:54:06.339055 | orchestrator | 2025-09-23 07:54:06.339066 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-23 07:54:06.339076 | orchestrator | 2025-09-23 07:54:06.339087 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-23 07:54:06.339097 | orchestrator | Tuesday 23 September 2025 07:49:21 +0000 (0:00:00.692) 0:00:02.043 ***** 2025-09-23 07:54:06.339109 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:54:06.339276 | orchestrator | 2025-09-23 07:54:06.339294 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-23 07:54:06.339314 | orchestrator | Tuesday 23 September 2025 07:49:22 +0000 (0:00:01.474) 0:00:03.518 ***** 2025-09-23 07:54:06.339333 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:54:06.339346 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:54:06.339359 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:54:06.339371 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:54:06.339384 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:54:06.339396 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:54:06.339430 | orchestrator | 2025-09-23 07:54:06.339443 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-23 07:54:06.339455 | orchestrator | Tuesday 23 September 2025 07:49:24 +0000 (0:00:01.414) 0:00:04.933 ***** 2025-09-23 07:54:06.339468 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:54:06.339481 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:54:06.339493 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:54:06.339505 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:54:06.339517 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:54:06.339529 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:54:06.339541 | orchestrator | 2025-09-23 07:54:06.339554 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-23 07:54:06.339567 | orchestrator | Tuesday 23 September 2025 07:49:25 +0000 (0:00:01.103) 0:00:06.036 ***** 2025-09-23 07:54:06.339580 | orchestrator | ok: [testbed-node-0] => { 2025-09-23 07:54:06.339593 | orchestrator |  "changed": false, 2025-09-23 07:54:06.339606 | orchestrator |  "msg": "All assertions passed" 2025-09-23 07:54:06.339618 | orchestrator | } 2025-09-23 07:54:06.339631 | orchestrator | 
ok: [testbed-node-1] => { 2025-09-23 07:54:06.339644 | orchestrator |  "changed": false, 2025-09-23 07:54:06.339657 | orchestrator |  "msg": "All assertions passed" 2025-09-23 07:54:06.339670 | orchestrator | } 2025-09-23 07:54:06.339682 | orchestrator | ok: [testbed-node-2] => { 2025-09-23 07:54:06.339693 | orchestrator |  "changed": false, 2025-09-23 07:54:06.339704 | orchestrator |  "msg": "All assertions passed" 2025-09-23 07:54:06.339714 | orchestrator | } 2025-09-23 07:54:06.339725 | orchestrator | ok: [testbed-node-3] => { 2025-09-23 07:54:06.339735 | orchestrator |  "changed": false, 2025-09-23 07:54:06.339746 | orchestrator |  "msg": "All assertions passed" 2025-09-23 07:54:06.339757 | orchestrator | } 2025-09-23 07:54:06.339767 | orchestrator | ok: [testbed-node-4] => { 2025-09-23 07:54:06.339777 | orchestrator |  "changed": false, 2025-09-23 07:54:06.339788 | orchestrator |  "msg": "All assertions passed" 2025-09-23 07:54:06.339799 | orchestrator | } 2025-09-23 07:54:06.339809 | orchestrator | ok: [testbed-node-5] => { 2025-09-23 07:54:06.339822 | orchestrator |  "changed": false, 2025-09-23 07:54:06.339948 | orchestrator |  "msg": "All assertions passed" 2025-09-23 07:54:06.339975 | orchestrator | } 2025-09-23 07:54:06.339991 | orchestrator | 2025-09-23 07:54:06.340002 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-23 07:54:06.340013 | orchestrator | Tuesday 23 September 2025 07:49:26 +0000 (0:00:00.680) 0:00:06.717 ***** 2025-09-23 07:54:06.340023 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.340034 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.340045 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.340055 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.340066 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.340077 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.340087 | orchestrator | 2025-09-23 
07:54:06.340098 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-23 07:54:06.340109 | orchestrator | Tuesday 23 September 2025 07:49:26 +0000 (0:00:00.532) 0:00:07.250 ***** 2025-09-23 07:54:06.340120 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-23 07:54:06.340130 | orchestrator | 2025-09-23 07:54:06.340141 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-23 07:54:06.340152 | orchestrator | Tuesday 23 September 2025 07:49:30 +0000 (0:00:03.577) 0:00:10.827 ***** 2025-09-23 07:54:06.340162 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-23 07:54:06.340175 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-23 07:54:06.340185 | orchestrator | 2025-09-23 07:54:06.340211 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-23 07:54:06.340222 | orchestrator | Tuesday 23 September 2025 07:49:37 +0000 (0:00:07.148) 0:00:17.976 ***** 2025-09-23 07:54:06.340245 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-23 07:54:06.340255 | orchestrator | 2025-09-23 07:54:06.340294 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-23 07:54:06.340306 | orchestrator | Tuesday 23 September 2025 07:49:40 +0000 (0:00:03.421) 0:00:21.397 ***** 2025-09-23 07:54:06.340316 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-23 07:54:06.340327 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-23 07:54:06.340338 | orchestrator | 2025-09-23 07:54:06.340349 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-23 07:54:06.340367 | orchestrator | Tuesday 23 September 2025 07:49:45 
+0000 (0:00:04.706) 0:00:26.103 ***** 2025-09-23 07:54:06.340378 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-23 07:54:06.340389 | orchestrator | 2025-09-23 07:54:06.340399 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-23 07:54:06.340412 | orchestrator | Tuesday 23 September 2025 07:49:49 +0000 (0:00:03.675) 0:00:29.779 ***** 2025-09-23 07:54:06.340431 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-23 07:54:06.340449 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-23 07:54:06.340465 | orchestrator | 2025-09-23 07:54:06.340476 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-23 07:54:06.340487 | orchestrator | Tuesday 23 September 2025 07:49:56 +0000 (0:00:07.507) 0:00:37.287 ***** 2025-09-23 07:54:06.340497 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.340508 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.340518 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.340529 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.340539 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.340550 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.340560 | orchestrator | 2025-09-23 07:54:06.340571 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-23 07:54:06.340582 | orchestrator | Tuesday 23 September 2025 07:49:57 +0000 (0:00:00.837) 0:00:38.124 ***** 2025-09-23 07:54:06.340592 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.340603 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.340613 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.340624 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.340634 | orchestrator | skipping: [testbed-node-4] 2025-09-23 
07:54:06.340645 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.340656 | orchestrator | 2025-09-23 07:54:06.340666 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-23 07:54:06.340677 | orchestrator | Tuesday 23 September 2025 07:49:59 +0000 (0:00:02.414) 0:00:40.539 ***** 2025-09-23 07:54:06.340688 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:54:06.340699 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:54:06.340709 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:54:06.340720 | orchestrator | ok: [testbed-node-3] 2025-09-23 07:54:06.340731 | orchestrator | ok: [testbed-node-4] 2025-09-23 07:54:06.340741 | orchestrator | ok: [testbed-node-5] 2025-09-23 07:54:06.340752 | orchestrator | 2025-09-23 07:54:06.340763 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-23 07:54:06.340773 | orchestrator | Tuesday 23 September 2025 07:50:01 +0000 (0:00:01.190) 0:00:41.730 ***** 2025-09-23 07:54:06.340784 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.340795 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.340805 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.340816 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.340826 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.340837 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.340847 | orchestrator | 2025-09-23 07:54:06.340858 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-23 07:54:06.340869 | orchestrator | Tuesday 23 September 2025 07:50:03 +0000 (0:00:02.790) 0:00:44.521 ***** 2025-09-23 07:54:06.340918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-23 07:54:06.340954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-23 07:54:06.340974 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-23 07:54:06.340986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-23 07:54:06.340998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.341017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.341029 | orchestrator |
2025-09-23 07:54:06.341040 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-09-23 07:54:06.341051 | orchestrator | Tuesday 23 September 2025 07:50:07 +0000 (0:00:03.460) 0:00:47.981 *****
2025-09-23 07:54:06.341062 | orchestrator | [WARNING]: Skipped
2025-09-23 07:54:06.341073 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-09-23 07:54:06.341084 | orchestrator | due to this access issue:
2025-09-23 07:54:06.341095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-09-23 07:54:06.341106 | orchestrator | a directory
2025-09-23 07:54:06.341117 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 07:54:06.341127 | orchestrator |
2025-09-23 07:54:06.341138 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-23 07:54:06.341155 | orchestrator | Tuesday 23 September 2025 07:50:08 +0000 (0:00:01.193) 0:00:49.175 *****
2025-09-23 07:54:06.341166 | orchestrator | included:
/ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:54:06.341179 | orchestrator | 2025-09-23 07:54:06.341190 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-23 07:54:06.341201 | orchestrator | Tuesday 23 September 2025 07:50:10 +0000 (0:00:02.003) 0:00:51.178 ***** 2025-09-23 07:54:06.341217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-23 07:54:06.341230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-23 07:54:06.341250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-23 07:54:06.341262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-23 07:54:06.341282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-23 07:54:06.341299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-23 07:54:06.341310 | orchestrator | 2025-09-23 07:54:06.341321 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-23 07:54:06.341332 | orchestrator | Tuesday 23 September 2025 07:50:14 +0000 (0:00:03.977) 0:00:55.156 ***** 2025-09-23 07:54:06.341344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.341366 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.341387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.341403 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.341415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.341432 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.341449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.341461 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.341472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.341490 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.341501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.341512 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.341523 | orchestrator |
2025-09-23 07:54:06.341534 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-09-23 07:54:06.341545 | orchestrator | Tuesday 23 September 2025 07:50:18 +0000 (0:00:03.900) 0:00:59.056 *****
2025-09-23 07:54:06.341556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.341567 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.341585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.341596 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.341621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.341632 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.341652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.341663 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.341674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.341685 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.341696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.341707 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.341718 | orchestrator |
2025-09-23 07:54:06.341728 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-09-23 07:54:06.341739 | orchestrator | Tuesday 23 September 2025 07:50:21 +0000 (0:00:03.435) 0:01:02.491 *****
2025-09-23 07:54:06.341750 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.341760 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.341771 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.341781 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.341792 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.341802 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.341817 | orchestrator |
2025-09-23 07:54:06.341836 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-09-23 07:54:06.341862 | orchestrator | Tuesday 23 September 2025 07:50:24 +0000 (0:00:02.877) 0:01:05.369 *****
2025-09-23 07:54:06.341873 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.341948 | orchestrator |
2025-09-23 07:54:06.341963 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-09-23 07:54:06.341974 | orchestrator | Tuesday 23 September 2025 07:50:24 +0000 (0:00:00.102) 0:01:05.471 *****
2025-09-23 07:54:06.341985 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.341995 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.342006 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.342092 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.342120 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.342131 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.342142 | orchestrator |
2025-09-23 07:54:06.342153 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-09-23 07:54:06.342170 | orchestrator | Tuesday 23 September 2025 07:50:25 +0000 (0:00:00.717) 0:01:06.189 *****
2025-09-23 07:54:06.342181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.342193 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.342204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.342216 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.342227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.342238 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.342811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.342990 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.343036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343058 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.343075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343120 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.343137 | orchestrator |
2025-09-23 07:54:06.343157 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-09-23 07:54:06.343198 | orchestrator | Tuesday 23 September 2025 07:50:29 +0000 (0:00:03.551) 0:01:09.740 *****
2025-09-23 07:54:06.343219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343387 | orchestrator |
2025-09-23 07:54:06.343405 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-23 07:54:06.343428 | orchestrator | Tuesday 23 September 2025 07:50:33 +0000 (0:00:04.528) 0:01:14.269 *****
2025-09-23 07:54:06.343450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343563 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343614 | orchestrator |
2025-09-23 07:54:06.343633 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-23 07:54:06.343650 | orchestrator | Tuesday 23 September 2025 07:50:40 +0000 (0:00:06.723) 0:01:20.992 *****
2025-09-23 07:54:06.343710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343732 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.343752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343770 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.343789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343808 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.343845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.343877 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.343928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.343947 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.343986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.344007 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.344026 | orchestrator |
2025-09-23 07:54:06.344045 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-23 07:54:06.344060 | orchestrator | Tuesday 23 September 2025 07:50:43 +0000 (0:00:03.533) 0:01:24.525 *****
2025-09-23 07:54:06.344071 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.344082 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.344093 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.344104 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:54:06.344114 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:54:06.344125 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:54:06.344142 | orchestrator |
2025-09-23 07:54:06.344160 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-23 07:54:06.344178 | orchestrator | Tuesday 23 September 2025 07:50:47 +0000 (0:00:03.188) 0:01:27.714 *****
2025-09-23 07:54:06.344197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.344216 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.344234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.344266 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.344286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.344307 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.344362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.344395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.344433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.344453 | orchestrator |
2025-09-23 07:54:06.344472 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-23 07:54:06.344492 | orchestrator | Tuesday 23 September 2025 07:50:51 +0000 (0:00:04.842) 0:01:32.556 *****
2025-09-23 07:54:06.344522 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.344542 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.344561 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.344603 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.344622 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.344640 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.344658 | orchestrator |
2025-09-23 07:54:06.344677 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-23 07:54:06.344695 | orchestrator | Tuesday 23 September 2025 07:50:54 +0000 (0:00:02.447) 0:01:35.003 *****
2025-09-23 07:54:06.344713 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.344728 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.344745 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.344764 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.344803 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.344825 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.344868 | orchestrator |
2025-09-23 07:54:06.344914 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-23 07:54:06.344934 | orchestrator | Tuesday 23 September 2025 07:50:57 +0000 (0:00:02.681) 0:01:37.684 *****
2025-09-23 07:54:06.344953 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.344971 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.344989 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.345006 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.345024 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.345044 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.345064 | orchestrator | 2025-09-23 07:54:06.345083 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-23 07:54:06.345103 | orchestrator | Tuesday 23 September 2025 07:51:00 +0000 (0:00:02.999) 0:01:40.683 ***** 2025-09-23 07:54:06.345122 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.345141 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.345160 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.345179 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.345198 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.345216 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.345235 | orchestrator | 2025-09-23 07:54:06.345253 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-23 07:54:06.345271 | orchestrator | Tuesday 23 September 2025 07:51:02 +0000 (0:00:02.115) 0:01:42.798 ***** 2025-09-23 07:54:06.345289 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.345307 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.345325 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.345344 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.345368 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.345380 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.345391 | orchestrator | 2025-09-23 07:54:06.345402 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-23 07:54:06.345413 | orchestrator | Tuesday 23 September 2025 07:51:05 +0000 (0:00:02.927) 0:01:45.726 ***** 2025-09-23 07:54:06.345424 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.345435 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.345446 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.345457 | orchestrator | skipping: [testbed-node-4] 
2025-09-23 07:54:06.345467 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.345478 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.345489 | orchestrator | 2025-09-23 07:54:06.345499 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-23 07:54:06.345511 | orchestrator | Tuesday 23 September 2025 07:51:07 +0000 (0:00:01.977) 0:01:47.703 ***** 2025-09-23 07:54:06.345522 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-23 07:54:06.345556 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.345568 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-23 07:54:06.345579 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.345620 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-23 07:54:06.345639 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.345674 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-23 07:54:06.345695 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.345713 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-23 07:54:06.345732 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.345743 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-23 07:54:06.345754 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.345764 | orchestrator | 2025-09-23 07:54:06.345775 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-23 07:54:06.345786 | orchestrator | Tuesday 23 September 2025 07:51:09 +0000 (0:00:02.058) 0:01:49.762 ***** 2025-09-23 07:54:06.345798 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.345848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.345859 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.345869 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.345916 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:54:06.345929 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.345966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.345977 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.345988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:54:06.345998 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.346008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:54:06.346079 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.346105 | orchestrator | 2025-09-23 07:54:06.346115 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-23 07:54:06.346125 | orchestrator | Tuesday 23 September 2025 07:51:11 +0000 (0:00:02.200) 0:01:51.962 ***** 2025-09-23 07:54:06.346135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.346146 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.346167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.346201 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.346213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.346222 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.346232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:54:06.346242 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.346252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:54:06.346261 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.346271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-23 07:54:06.346287 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.346297 | orchestrator | 2025-09-23 07:54:06.346306 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-23 07:54:06.346316 | orchestrator | Tuesday 23 September 2025 07:51:14 +0000 (0:00:02.945) 0:01:54.908 ***** 2025-09-23 07:54:06.346326 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.346341 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.346352 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.346361 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.346371 | 
orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.346380 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.346390 | orchestrator | 2025-09-23 07:54:06.346400 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-23 07:54:06.346409 | orchestrator | Tuesday 23 September 2025 07:51:17 +0000 (0:00:03.139) 0:01:58.047 ***** 2025-09-23 07:54:06.346419 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.346429 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.346438 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.346448 | orchestrator | changed: [testbed-node-3] 2025-09-23 07:54:06.346457 | orchestrator | changed: [testbed-node-4] 2025-09-23 07:54:06.346467 | orchestrator | changed: [testbed-node-5] 2025-09-23 07:54:06.346489 | orchestrator | 2025-09-23 07:54:06.346516 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-23 07:54:06.346536 | orchestrator | Tuesday 23 September 2025 07:51:24 +0000 (0:00:07.412) 0:02:05.460 ***** 2025-09-23 07:54:06.346552 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.346570 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.346588 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.346606 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.346624 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.346642 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.346661 | orchestrator | 2025-09-23 07:54:06.346680 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-23 07:54:06.346691 | orchestrator | Tuesday 23 September 2025 07:51:27 +0000 (0:00:02.875) 0:02:08.336 ***** 2025-09-23 07:54:06.346700 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.346710 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.346734 | 
orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.346743 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.346753 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.346762 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.346771 | orchestrator | 2025-09-23 07:54:06.346781 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-23 07:54:06.346791 | orchestrator | Tuesday 23 September 2025 07:51:29 +0000 (0:00:01.996) 0:02:10.333 ***** 2025-09-23 07:54:06.346801 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.346811 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.346820 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.346830 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.346840 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.346849 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.346859 | orchestrator | 2025-09-23 07:54:06.346869 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-23 07:54:06.346879 | orchestrator | Tuesday 23 September 2025 07:51:31 +0000 (0:00:01.957) 0:02:12.290 ***** 2025-09-23 07:54:06.346947 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.346959 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.346969 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.346978 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.346988 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.347006 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.347016 | orchestrator | 2025-09-23 07:54:06.347026 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-23 07:54:06.347036 | orchestrator | Tuesday 23 September 2025 07:51:33 +0000 (0:00:02.099) 0:02:14.390 ***** 2025-09-23 07:54:06.347045 | 
orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.347055 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.347065 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.347074 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.347084 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.347093 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.347116 | orchestrator | 2025-09-23 07:54:06.347127 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-23 07:54:06.347136 | orchestrator | Tuesday 23 September 2025 07:51:37 +0000 (0:00:03.730) 0:02:18.121 ***** 2025-09-23 07:54:06.347146 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.347155 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.347165 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.347174 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.347184 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.347194 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.347203 | orchestrator | 2025-09-23 07:54:06.347213 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-23 07:54:06.347223 | orchestrator | Tuesday 23 September 2025 07:51:39 +0000 (0:00:01.961) 0:02:20.083 ***** 2025-09-23 07:54:06.347233 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.347243 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.347252 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.347261 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.347271 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.347280 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.347290 | orchestrator | 2025-09-23 07:54:06.347300 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 
2025-09-23 07:54:06.347310 | orchestrator | Tuesday 23 September 2025 07:51:41 +0000 (0:00:01.656) 0:02:21.740 ***** 2025-09-23 07:54:06.347320 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-23 07:54:06.347332 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.347341 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-23 07:54:06.347351 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.347360 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-23 07:54:06.347381 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:54:06.347391 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-23 07:54:06.347402 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:06.347421 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-23 07:54:06.347431 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:54:06.347441 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-23 07:54:06.347451 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:54:06.347461 | orchestrator | 2025-09-23 07:54:06.347470 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-23 07:54:06.347480 | orchestrator | Tuesday 23 September 2025 07:51:43 +0000 (0:00:01.882) 0:02:23.622 ***** 2025-09-23 07:54:06.347497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.347516 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:06.347526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-23 07:54:06.347537 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:06.347547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.347557 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.347567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.347578 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.347610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.347659 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.347676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.347687 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.347696 | orchestrator |
2025-09-23 07:54:06.347706 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-09-23 07:54:06.347716 | orchestrator | Tuesday 23 September 2025 07:51:45 +0000 (0:00:01.972) 0:02:25.595 *****
2025-09-23 07:54:06.347727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.347753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.347773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-23 07:54:06.347789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.347809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.347819 | orchestrator | changed: [testbed-node-5] => (item={'key':
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-23 07:54:06.347829 | orchestrator |
2025-09-23 07:54:06.347839 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-23 07:54:06.347849 | orchestrator | Tuesday 23 September 2025 07:51:48 +0000 (0:00:03.729) 0:02:29.325 *****
2025-09-23 07:54:06.347859 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:54:06.347875 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:54:06.347914 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:54:06.347931 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:54:06.347944 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:54:06.347959 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:54:06.347974 | orchestrator |
2025-09-23 07:54:06.347987 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-09-23 07:54:06.348001 | orchestrator | Tuesday 23 September 2025 07:51:49 +0000 (0:00:00.701) 0:02:30.026 *****
2025-09-23 07:54:06.348016 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:54:06.348050 | orchestrator |
2025-09-23 07:54:06.348066 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-09-23 07:54:06.348083 | orchestrator | Tuesday 23 September 2025 07:51:51 +0000 (0:00:02.060) 0:02:32.087 *****
2025-09-23 07:54:06.348122 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:54:06.348139 | orchestrator |
2025-09-23 07:54:06.348156 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-09-23 07:54:06.348172 | orchestrator | Tuesday 23 September 2025 07:51:53 +0000 (0:00:02.185) 0:02:34.272 *****
2025-09-23 07:54:06.348187 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:54:06.348197 | orchestrator |
2025-09-23 07:54:06.348207 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-23 07:54:06.348216 | orchestrator | Tuesday 23 September 2025 07:52:38 +0000 (0:00:44.517) 0:03:18.789 *****
2025-09-23 07:54:06.348240 | orchestrator |
2025-09-23 07:54:06.348250 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-23 07:54:06.348259 | orchestrator | Tuesday 23 September 2025 07:52:38 +0000 (0:00:00.240) 0:03:19.098 *****
2025-09-23 07:54:06.348269 | orchestrator |
2025-09-23 07:54:06.348279 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-23 07:54:06.348288 | orchestrator | Tuesday 23 September 2025 07:52:38 +0000 (0:00:00.063) 0:03:19.162 *****
2025-09-23 07:54:06.348298 | orchestrator |
2025-09-23 07:54:06.348307 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-23 07:54:06.348317 | orchestrator | Tuesday 23 September 2025 07:52:38 +0000 (0:00:00.069) 0:03:19.232 *****
2025-09-23 07:54:06.348326 | orchestrator |
2025-09-23 07:54:06.348346 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-23 07:54:06.348356 | orchestrator | Tuesday 23 September 2025 07:52:38 +0000 (0:00:00.065) 0:03:19.297 *****
2025-09-23 07:54:06.348366 | orchestrator |
2025-09-23 07:54:06.348376 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-23 07:54:06.348385 | orchestrator | Tuesday 23 September 2025 07:52:38 +0000 (0:00:00.067) 0:03:19.365 *****
2025-09-23 07:54:06.348394 | orchestrator |
2025-09-23 07:54:06.348404 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-09-23 07:54:06.348413 | orchestrator | Tuesday 23 September 2025 07:52:38 +0000 (0:00:00.068) 0:03:18.858 *****
2025-09-23 07:54:06.348423 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:54:06.348432 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:54:06.348442 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:54:06.348451 | orchestrator |
2025-09-23 07:54:06.348468 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-09-23 07:54:06.348478 | orchestrator | Tuesday 23 September 2025 07:53:08 +0000 (0:00:29.430) 0:03:48.795 *****
2025-09-23 07:54:06.348488 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:54:06.348497 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:54:06.348507 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:54:06.348516 | orchestrator |
2025-09-23 07:54:06.348526 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:54:06.348536 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-23 07:54:06.348547 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-23 07:54:06.348556 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-23 07:54:06.348566 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-23 07:54:06.348576 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-23 07:54:06.348589 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-23 07:54:06.348606 | orchestrator |
2025-09-23 07:54:06.348622 | orchestrator |
2025-09-23 07:54:06.348637 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:54:06.348673 | orchestrator | Tuesday 23 September 2025 07:54:03 +0000 (0:00:55.233) 0:04:44.029 *****
2025-09-23 07:54:06.348689 | orchestrator | ===============================================================================
2025-09-23 07:54:06.348699 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 55.23s
2025-09-23 07:54:06.348709 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.52s
2025-09-23 07:54:06.348727 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.43s
2025-09-23 07:54:06.348736 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.51s
2025-09-23 07:54:06.348746 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.41s
2025-09-23 07:54:06.348755 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.15s
2025-09-23 07:54:06.348765 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.72s
2025-09-23 07:54:06.348775 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.84s
2025-09-23 07:54:06.348784 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.71s
2025-09-23 07:54:06.348794 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.53s
2025-09-23 07:54:06.348803 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.98s
2025-09-23 07:54:06.348814 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.90s
2025-09-23 07:54:06.348824 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.73s
2025-09-23 07:54:06.348845 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.73s
2025-09-23 07:54:06.348865 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.68s
2025-09-23 07:54:06.348875 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.58s
2025-09-23 07:54:06.348906 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.55s
2025-09-23 07:54:06.348918 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.53s
2025-09-23 07:54:06.348941 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.46s
2025-09-23 07:54:06.348951 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.44s
2025-09-23 07:54:06.348961 | orchestrator | 2025-09-23 07:54:06 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:06.348971 | orchestrator | 2025-09-23 07:54:06 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:06.348981 | orchestrator | 2025-09-23 07:54:06 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:09.374103 | orchestrator | 2025-09-23 07:54:09 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:09.374669 | orchestrator | 2025-09-23 07:54:09 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED
2025-09-23 07:54:09.376736 | orchestrator | 2025-09-23 07:54:09 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:09.377956 | orchestrator | 2025-09-23 07:54:09 | INFO  | Task
03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:09.377991 | orchestrator | 2025-09-23 07:54:09 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:12.419063 | orchestrator | 2025-09-23 07:54:12 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:12.421577 | orchestrator | 2025-09-23 07:54:12 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state STARTED
2025-09-23 07:54:12.423672 | orchestrator | 2025-09-23 07:54:12 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:12.425586 | orchestrator | 2025-09-23 07:54:12 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:12.425612 | orchestrator | 2025-09-23 07:54:12 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:15.465508 | orchestrator | 2025-09-23 07:54:15 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:15.467186 | orchestrator | 2025-09-23 07:54:15 | INFO  | Task 9a256c78-cd4b-4275-bb25-dd2e2fb0be90 is in state SUCCESS
2025-09-23 07:54:15.467644 | orchestrator | 2025-09-23 07:54:15 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:15.468843 | orchestrator | 2025-09-23 07:54:15 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:15.468910 | orchestrator | 2025-09-23 07:54:15 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:18.526355 | orchestrator | 2025-09-23 07:54:18 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:18.527667 | orchestrator | 2025-09-23 07:54:18 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:54:18.529537 | orchestrator | 2025-09-23 07:54:18 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:18.531204 | orchestrator | 2025-09-23 07:54:18 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:18.531260 | orchestrator | 2025-09-23 07:54:18 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:21.573709 | orchestrator | 2025-09-23 07:54:21 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:21.575072 | orchestrator | 2025-09-23 07:54:21 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:54:21.575913 | orchestrator | 2025-09-23 07:54:21 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:21.576799 | orchestrator | 2025-09-23 07:54:21 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:21.576820 | orchestrator | 2025-09-23 07:54:21 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:24.628556 | orchestrator | 2025-09-23 07:54:24 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:24.631753 | orchestrator | 2025-09-23 07:54:24 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:54:24.633328 | orchestrator | 2025-09-23 07:54:24 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:24.635129 | orchestrator | 2025-09-23 07:54:24 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:24.635190 | orchestrator | 2025-09-23 07:54:24 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:27.697107 | orchestrator | 2025-09-23 07:54:27 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:27.698619 | orchestrator | 2025-09-23 07:54:27 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:54:27.700825 | orchestrator | 2025-09-23 07:54:27 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:27.702560 | orchestrator | 2025-09-23 07:54:27 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:27.702760 | orchestrator | 2025-09-23 07:54:27 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:30.744693 | orchestrator | 2025-09-23 07:54:30 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:30.746665 | orchestrator | 2025-09-23 07:54:30 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:54:30.748912 | orchestrator | 2025-09-23 07:54:30 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:30.749884 | orchestrator | 2025-09-23 07:54:30 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:30.750098 | orchestrator | 2025-09-23 07:54:30 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:33.793475 | orchestrator | 2025-09-23 07:54:33 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:33.794404 | orchestrator | 2025-09-23 07:54:33 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:54:33.797001 | orchestrator | 2025-09-23 07:54:33 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:33.799194 | orchestrator | 2025-09-23 07:54:33 | INFO  | Task 03a07530-4faf-45ed-867c-2e074d452393 is in state STARTED
2025-09-23 07:54:33.799353 | orchestrator | 2025-09-23 07:54:33 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:54:36.839064 | orchestrator | 2025-09-23 07:54:36 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:54:36.839145 | orchestrator | 2025-09-23 07:54:36 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:54:36.839652 | orchestrator | 2025-09-23 07:54:36 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:54:36.841353 | orchestrator | 2025-09-23 07:54:36 | INFO  | Task
03a07530-4faf-45ed-867c-2e074d452393 is in state SUCCESS
2025-09-23 07:54:36.843587 | orchestrator |
2025-09-23 07:54:36.843625 | orchestrator |
2025-09-23 07:54:36.843637 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:54:36.843649 | orchestrator |
2025-09-23 07:54:36.843660 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:54:36.843671 | orchestrator | Tuesday 23 September 2025 07:53:42 +0000 (0:00:00.285) 0:00:00.285 *****
2025-09-23 07:54:36.843682 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:54:36.843693 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:54:36.843704 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:54:36.843714 | orchestrator | ok: [testbed-manager]
2025-09-23 07:54:36.843730 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:54:36.844119 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:54:36.844142 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:54:36.844153 | orchestrator |
2025-09-23 07:54:36.844164 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:54:36.844176 | orchestrator | Tuesday 23 September 2025 07:53:42 +0000 (0:00:00.858) 0:00:01.143 *****
2025-09-23 07:54:36.844187 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-09-23 07:54:36.844198 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-09-23 07:54:36.844208 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-09-23 07:54:36.844220 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-09-23 07:54:36.844231 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-09-23 07:54:36.844242 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-09-23 07:54:36.844252 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-09-23 07:54:36.844263 | orchestrator |
2025-09-23 07:54:36.844274 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-23 07:54:36.844285 | orchestrator |
2025-09-23 07:54:36.844296 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-09-23 07:54:36.844306 | orchestrator | Tuesday 23 September 2025 07:53:43 +0000 (0:00:00.719) 0:00:01.863 *****
2025-09-23 07:54:36.844318 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:54:36.844329 | orchestrator |
2025-09-23 07:54:36.844340 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-09-23 07:54:36.844351 | orchestrator | Tuesday 23 September 2025 07:53:45 +0000 (0:00:01.858) 0:00:03.721 *****
2025-09-23 07:54:36.844362 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-09-23 07:54:36.844372 | orchestrator |
2025-09-23 07:54:36.844383 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-09-23 07:54:36.844417 | orchestrator | Tuesday 23 September 2025 07:53:49 +0000 (0:00:03.761) 0:00:07.483 *****
2025-09-23 07:54:36.844429 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-09-23 07:54:36.844440 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-09-23 07:54:36.844451 | orchestrator |
2025-09-23 07:54:36.844462 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-09-23 07:54:36.844472 | orchestrator | Tuesday 23 September 2025 07:53:55 +0000 (0:00:06.489) 0:00:13.972 *****
2025-09-23 07:54:36.844483 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-23 07:54:36.844494 | orchestrator |
2025-09-23 07:54:36.844504 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-09-23 07:54:36.844515 | orchestrator | Tuesday 23 September 2025 07:53:59 +0000 (0:00:03.324) 0:00:17.297 *****
2025-09-23 07:54:36.844526 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-23 07:54:36.844536 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-09-23 07:54:36.844547 | orchestrator |
2025-09-23 07:54:36.844557 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-09-23 07:54:36.844568 | orchestrator | Tuesday 23 September 2025 07:54:03 +0000 (0:00:04.234) 0:00:21.532 *****
2025-09-23 07:54:36.844578 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-23 07:54:36.844589 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-09-23 07:54:36.844600 | orchestrator |
2025-09-23 07:54:36.844610 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-09-23 07:54:36.844621 | orchestrator | Tuesday 23 September 2025 07:54:10 +0000 (0:00:06.972) 0:00:28.505 *****
2025-09-23 07:54:36.844645 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-09-23 07:54:36.844656 | orchestrator |
2025-09-23 07:54:36.844667 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:54:36.844677 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:54:36.844688 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:54:36.844699 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:54:36.844710 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:54:36.844721 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:54:36.844745 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:54:36.844758 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-23 07:54:36.844770 | orchestrator |
2025-09-23 07:54:36.844783 | orchestrator |
2025-09-23 07:54:36.844796 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:54:36.844809 | orchestrator | Tuesday 23 September 2025 07:54:15 +0000 (0:00:04.786) 0:00:33.292 *****
2025-09-23 07:54:36.844821 | orchestrator | ===============================================================================
2025-09-23 07:54:36.844834 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.97s
2025-09-23 07:54:36.844847 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.49s
2025-09-23 07:54:36.844881 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.79s
2025-09-23 07:54:36.844893 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.23s
2025-09-23 07:54:36.844914 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.76s
2025-09-23 07:54:36.844926 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.32s
2025-09-23 07:54:36.844939 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.86s
2025-09-23 07:54:36.844951 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s
2025-09-23 07:54:36.844964 | orchestrator | Group hosts based on enabled services
----------------------------------- 0.72s
2025-09-23 07:54:36.844976 | orchestrator |
2025-09-23 07:54:36.844988 | orchestrator |
2025-09-23 07:54:36.845000 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:54:36.845012 | orchestrator |
2025-09-23 07:54:36.845024 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:54:36.845036 | orchestrator | Tuesday 23 September 2025 07:52:45 +0000 (0:00:00.242) 0:00:00.242 *****
2025-09-23 07:54:36.845048 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:54:36.845061 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:54:36.845073 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:54:36.845085 | orchestrator |
2025-09-23 07:54:36.845095 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:54:36.845106 | orchestrator | Tuesday 23 September 2025 07:52:45 +0000 (0:00:00.261) 0:00:00.504 *****
2025-09-23 07:54:36.845117 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-23 07:54:36.845127 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-23 07:54:36.845138 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-23 07:54:36.845149 | orchestrator |
2025-09-23 07:54:36.845160 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-23 07:54:36.845170 | orchestrator |
2025-09-23 07:54:36.845181 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-23 07:54:36.845191 | orchestrator | Tuesday 23 September 2025 07:52:45 +0000 (0:00:00.507) 0:00:00.865 *****
2025-09-23 07:54:36.845202 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:54:36.845213 | orchestrator |
2025-09-23 07:54:36.845223 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-23 07:54:36.845234 | orchestrator | Tuesday 23 September 2025 07:52:46 +0000 (0:00:00.507) 0:00:01.372 *****
2025-09-23 07:54:36.845245 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-23 07:54:36.845255 | orchestrator |
2025-09-23 07:54:36.845266 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-23 07:54:36.845277 | orchestrator | Tuesday 23 September 2025 07:52:50 +0000 (0:00:03.871) 0:00:05.244 *****
2025-09-23 07:54:36.845287 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-23 07:54:36.845298 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-23 07:54:36.845309 | orchestrator |
2025-09-23 07:54:36.845319 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-23 07:54:36.845330 | orchestrator | Tuesday 23 September 2025 07:52:56 +0000 (0:00:06.643) 0:00:11.887 *****
2025-09-23 07:54:36.845340 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-23 07:54:36.845351 | orchestrator |
2025-09-23 07:54:36.845362 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-23 07:54:36.845384 | orchestrator | Tuesday 23 September 2025 07:53:00 +0000 (0:00:03.312) 0:00:15.200 *****
2025-09-23 07:54:36.845405 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-23 07:54:36.845423 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-23 07:54:36.845440 | orchestrator |
2025-09-23 07:54:36.845458 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-23 07:54:36.845477 | orchestrator | Tuesday 23 September 2025 07:53:04 +0000 (0:00:04.041) 0:00:19.242 *****
2025-09-23 07:54:36.845509 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-23 07:54:36.845529 | orchestrator |
2025-09-23 07:54:36.845545 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-23 07:54:36.845556 | orchestrator | Tuesday 23 September 2025 07:53:07 +0000 (0:00:03.443) 0:00:22.686 *****
2025-09-23 07:54:36.845566 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-23 07:54:36.845577 | orchestrator |
2025-09-23 07:54:36.845587 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-23 07:54:36.845598 | orchestrator | Tuesday 23 September 2025 07:53:12 +0000 (0:00:02.924) 0:00:27.379 *****
2025-09-23 07:54:36.845608 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:54:36.845619 | orchestrator |
2025-09-23 07:54:36.845629 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-23 07:54:36.845650 | orchestrator | Tuesday 23 September 2025 07:53:15 +0000 (0:00:03.679) 0:00:30.304 *****
2025-09-23 07:54:36.845661 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:54:36.845672 | orchestrator |
2025-09-23 07:54:36.845682 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-23 07:54:36.845693 | orchestrator | Tuesday 23 September 2025 07:53:18 +0000 (0:00:03.835) 0:00:33.984 *****
2025-09-23 07:54:36.845703 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:54:36.845714 | orchestrator |
2025-09-23 07:54:36.845725 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-23 07:54:36.845735 | orchestrator | Tuesday 23 September 2025 07:53:22 +0000 (0:00:03.835) 0:00:37.819 *****
2025-09-23 07:54:36.845750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api',
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.845765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.845777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.845800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.845820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.845832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.845844 | orchestrator | 2025-09-23 07:54:36.845873 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-23 07:54:36.845884 | orchestrator | Tuesday 23 September 2025 07:53:24 +0000 (0:00:01.382) 0:00:39.202 ***** 2025-09-23 07:54:36.845895 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:36.845906 | orchestrator | 2025-09-23 07:54:36.845917 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-23 07:54:36.845927 | orchestrator | Tuesday 23 September 2025 07:53:24 +0000 (0:00:00.119) 0:00:39.321 ***** 2025-09-23 07:54:36.845938 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:36.845949 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:36.845959 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:36.845970 | orchestrator | 2025-09-23 07:54:36.845980 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-23 07:54:36.845991 | 
orchestrator | Tuesday 23 September 2025 07:53:24 +0000 (0:00:00.368) 0:00:39.690 ***** 2025-09-23 07:54:36.846001 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-23 07:54:36.846059 | orchestrator | 2025-09-23 07:54:36.846073 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-23 07:54:36.846084 | orchestrator | Tuesday 23 September 2025 07:53:25 +0000 (0:00:00.757) 0:00:40.447 ***** 2025-09-23 07:54:36.846095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846190 | orchestrator | 2025-09-23 07:54:36.846201 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-23 07:54:36.846212 | orchestrator | Tuesday 23 September 2025 07:53:27 +0000 (0:00:02.481) 0:00:42.929 ***** 2025-09-23 07:54:36.846223 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:54:36.846233 | orchestrator | ok: [testbed-node-1] 2025-09-23 
07:54:36.846244 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:54:36.846254 | orchestrator | 2025-09-23 07:54:36.846270 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-23 07:54:36.846281 | orchestrator | Tuesday 23 September 2025 07:53:28 +0000 (0:00:00.304) 0:00:43.234 ***** 2025-09-23 07:54:36.846291 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:54:36.846302 | orchestrator | 2025-09-23 07:54:36.846313 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-23 07:54:36.846323 | orchestrator | Tuesday 23 September 2025 07:53:28 +0000 (0:00:00.700) 0:00:43.935 ***** 2025-09-23 07:54:36.846343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846429 | orchestrator | 2025-09-23 07:54:36.846440 | orchestrator | TASK 
[service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-23 07:54:36.846451 | orchestrator | Tuesday 23 September 2025 07:53:31 +0000 (0:00:02.452) 0:00:46.387 ***** 2025-09-23 07:54:36.846462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.846479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:54:36.846490 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:36.846502 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.846523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:54:36.846534 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:36.846553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.846564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:54:36.846581 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:36.846592 | orchestrator | 2025-09-23 07:54:36.846603 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-23 07:54:36.846614 | orchestrator | Tuesday 23 September 2025 07:53:31 +0000 (0:00:00.661) 0:00:47.049 ***** 2025-09-23 07:54:36.846625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.846641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:54:36.846652 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:36.846669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.846681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:54:36.846692 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:36.846703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.846724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:54:36.846735 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:36.846746 | orchestrator | 2025-09-23 07:54:36.846757 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-23 07:54:36.846768 | orchestrator | Tuesday 23 September 2025 07:53:33 +0000 (0:00:01.251) 0:00:48.300 ***** 2025-09-23 07:54:36.846783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.846932 | orchestrator | 2025-09-23 07:54:36.846943 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-23 07:54:36.846954 | orchestrator | Tuesday 23 September 2025 07:53:35 +0000 (0:00:02.484) 0:00:50.785 ***** 2025-09-23 07:54:36.846973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.846991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.847002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.847013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.847029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.847047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 
07:54:36.847065 | orchestrator | 2025-09-23 07:54:36.847076 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-23 07:54:36.847087 | orchestrator | Tuesday 23 September 2025 07:53:40 +0000 (0:00:05.015) 0:00:55.801 ***** 2025-09-23 07:54:36.847098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.847109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 
07:54:36.847121 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:36.847132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.847147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:54:36.847159 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:36.847177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-23 07:54:36.847194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-23 07:54:36.847205 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:36.847216 | orchestrator | 2025-09-23 07:54:36.847227 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-23 07:54:36.847237 | orchestrator | Tuesday 23 September 2025 07:53:41 +0000 (0:00:00.604) 0:00:56.405 ***** 2025-09-23 07:54:36.847248 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.847264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.847282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-23 07:54:36.847301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.847312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.847323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 07:54:36.847334 | orchestrator | 2025-09-23 07:54:36.847345 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-23 07:54:36.847356 | orchestrator | Tuesday 23 September 2025 07:53:43 +0000 (0:00:02.554) 0:00:58.959 ***** 2025-09-23 07:54:36.847366 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:54:36.847377 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:54:36.847388 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:54:36.847399 | orchestrator | 2025-09-23 07:54:36.847409 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-23 07:54:36.847420 | orchestrator | Tuesday 23 September 2025 07:53:44 +0000 (0:00:00.304) 0:00:59.264 ***** 2025-09-23 07:54:36.847430 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:54:36.847441 | orchestrator | 2025-09-23 07:54:36.847452 | orchestrator | TASK [magnum : 
Creating Magnum database user and setting permissions] ********** 2025-09-23 07:54:36.847462 | orchestrator | Tuesday 23 September 2025 07:53:46 +0000 (0:00:02.508) 0:01:01.772 ***** 2025-09-23 07:54:36.847473 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:54:36.847483 | orchestrator | 2025-09-23 07:54:36.847494 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-23 07:54:36.847509 | orchestrator | Tuesday 23 September 2025 07:53:48 +0000 (0:00:02.292) 0:01:04.065 ***** 2025-09-23 07:54:36.847520 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:54:36.847540 | orchestrator | 2025-09-23 07:54:36.847551 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-23 07:54:36.847561 | orchestrator | Tuesday 23 September 2025 07:54:06 +0000 (0:00:17.642) 0:01:21.707 ***** 2025-09-23 07:54:36.847572 | orchestrator | 2025-09-23 07:54:36.847582 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-23 07:54:36.847593 | orchestrator | Tuesday 23 September 2025 07:54:06 +0000 (0:00:00.068) 0:01:21.776 ***** 2025-09-23 07:54:36.847603 | orchestrator | 2025-09-23 07:54:36.847614 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-23 07:54:36.847624 | orchestrator | Tuesday 23 September 2025 07:54:06 +0000 (0:00:00.065) 0:01:21.841 ***** 2025-09-23 07:54:36.847635 | orchestrator | 2025-09-23 07:54:36.847645 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-23 07:54:36.847656 | orchestrator | Tuesday 23 September 2025 07:54:06 +0000 (0:00:00.070) 0:01:21.912 ***** 2025-09-23 07:54:36.847666 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:54:36.847677 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:54:36.847688 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:54:36.847698 | 
orchestrator | 2025-09-23 07:54:36.847709 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-23 07:54:36.847726 | orchestrator | Tuesday 23 September 2025 07:54:20 +0000 (0:00:13.950) 0:01:35.863 ***** 2025-09-23 07:54:36.847737 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:54:36.847748 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:54:36.847758 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:54:36.847769 | orchestrator | 2025-09-23 07:54:36.847780 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:54:36.847790 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-23 07:54:36.847802 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-23 07:54:36.847812 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-23 07:54:36.847823 | orchestrator | 2025-09-23 07:54:36.847834 | orchestrator | 2025-09-23 07:54:36.847844 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:54:36.847871 | orchestrator | Tuesday 23 September 2025 07:54:35 +0000 (0:00:15.005) 0:01:50.869 ***** 2025-09-23 07:54:36.847882 | orchestrator | =============================================================================== 2025-09-23 07:54:36.847892 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.64s 2025-09-23 07:54:36.847903 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.01s 2025-09-23 07:54:36.847914 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.95s 2025-09-23 07:54:36.847924 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.64s 2025-09-23 
07:54:36.847935 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.02s 2025-09-23 07:54:36.847945 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.69s 2025-09-23 07:54:36.847956 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.04s 2025-09-23 07:54:36.847966 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.87s 2025-09-23 07:54:36.847977 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.84s 2025-09-23 07:54:36.847988 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.68s 2025-09-23 07:54:36.847998 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.44s 2025-09-23 07:54:36.848009 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.31s 2025-09-23 07:54:36.848029 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.92s 2025-09-23 07:54:36.848039 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.55s 2025-09-23 07:54:36.848050 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.51s 2025-09-23 07:54:36.848061 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.48s 2025-09-23 07:54:36.848071 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.48s 2025-09-23 07:54:36.848082 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.45s 2025-09-23 07:54:36.848092 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.29s 2025-09-23 07:54:36.848103 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.38s 2025-09-23 07:54:36.848114 
| orchestrator | 2025-09-23 07:54:36 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:54:39.869414 | orchestrator | 2025-09-23 07:54:39 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:54:39.870177 | orchestrator | 2025-09-23 07:54:39 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:54:39.871253 | orchestrator | 2025-09-23 07:54:39 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:54:39.872057 | orchestrator | 2025-09-23 07:54:39 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:54:39.872082 | orchestrator | 2025-09-23 07:54:39 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:54:42.906375 | orchestrator | 2025-09-23 07:54:42 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:54:42.906998 | orchestrator | 2025-09-23 07:54:42 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:54:42.907818 | orchestrator | 2025-09-23 07:54:42 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:54:42.908568 | orchestrator | 2025-09-23 07:54:42 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:54:42.908633 | orchestrator | 2025-09-23 07:54:42 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:54:45.943208 | orchestrator | 2025-09-23 07:54:45 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:54:45.946184 | orchestrator | 2025-09-23 07:54:45 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:54:45.950585 | orchestrator | 2025-09-23 07:54:45 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:54:45.950634 | orchestrator | 2025-09-23 07:54:45 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:54:45.950647 | orchestrator | 2025-09-23 
07:54:45 | INFO  | Wait 1 second(s) until the next check
d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:55:43.709105 | orchestrator | 2025-09-23 07:55:43 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:55:43.710886 | orchestrator | 2025-09-23 07:55:43 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:55:43.712536 | orchestrator | 2025-09-23 07:55:43 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:55:43.712596 | orchestrator | 2025-09-23 07:55:43 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:55:46.754709 | orchestrator | 2025-09-23 07:55:46 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:55:46.755646 | orchestrator | 2025-09-23 07:55:46 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:55:46.757623 | orchestrator | 2025-09-23 07:55:46 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:55:46.759374 | orchestrator | 2025-09-23 07:55:46 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:55:46.759655 | orchestrator | 2025-09-23 07:55:46 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:55:49.801714 | orchestrator | 2025-09-23 07:55:49 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:55:49.804989 | orchestrator | 2025-09-23 07:55:49 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:55:49.806494 | orchestrator | 2025-09-23 07:55:49 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:55:49.808323 | orchestrator | 2025-09-23 07:55:49 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:55:49.808357 | orchestrator | 2025-09-23 07:55:49 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:55:52.853568 | orchestrator | 2025-09-23 07:55:52 | INFO  | Task 
d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:55:52.855022 | orchestrator | 2025-09-23 07:55:52 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:55:52.857095 | orchestrator | 2025-09-23 07:55:52 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:55:52.859611 | orchestrator | 2025-09-23 07:55:52 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:55:52.859654 | orchestrator | 2025-09-23 07:55:52 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:55:55.907256 | orchestrator | 2025-09-23 07:55:55 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:55:55.908534 | orchestrator | 2025-09-23 07:55:55 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:55:55.910537 | orchestrator | 2025-09-23 07:55:55 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:55:55.912098 | orchestrator | 2025-09-23 07:55:55 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:55:55.912138 | orchestrator | 2025-09-23 07:55:55 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:55:58.963283 | orchestrator | 2025-09-23 07:55:58 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:55:58.965369 | orchestrator | 2025-09-23 07:55:58 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:55:58.967326 | orchestrator | 2025-09-23 07:55:58 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:55:58.968977 | orchestrator | 2025-09-23 07:55:58 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:55:58.969104 | orchestrator | 2025-09-23 07:55:58 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:02.009813 | orchestrator | 2025-09-23 07:56:02 | INFO  | Task 
d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:02.011583 | orchestrator | 2025-09-23 07:56:02 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:02.013738 | orchestrator | 2025-09-23 07:56:02 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:02.015347 | orchestrator | 2025-09-23 07:56:02 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:02.015390 | orchestrator | 2025-09-23 07:56:02 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:05.055037 | orchestrator | 2025-09-23 07:56:05 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:05.056220 | orchestrator | 2025-09-23 07:56:05 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:05.057627 | orchestrator | 2025-09-23 07:56:05 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:05.059313 | orchestrator | 2025-09-23 07:56:05 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:05.059362 | orchestrator | 2025-09-23 07:56:05 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:08.100237 | orchestrator | 2025-09-23 07:56:08 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:08.101570 | orchestrator | 2025-09-23 07:56:08 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:08.102159 | orchestrator | 2025-09-23 07:56:08 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:08.102862 | orchestrator | 2025-09-23 07:56:08 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:08.102877 | orchestrator | 2025-09-23 07:56:08 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:11.144553 | orchestrator | 2025-09-23 07:56:11 | INFO  | Task 
d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:11.145851 | orchestrator | 2025-09-23 07:56:11 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:11.147278 | orchestrator | 2025-09-23 07:56:11 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:11.149032 | orchestrator | 2025-09-23 07:56:11 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:11.149062 | orchestrator | 2025-09-23 07:56:11 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:14.194565 | orchestrator | 2025-09-23 07:56:14 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:14.196722 | orchestrator | 2025-09-23 07:56:14 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:14.198971 | orchestrator | 2025-09-23 07:56:14 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:14.200631 | orchestrator | 2025-09-23 07:56:14 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:14.200949 | orchestrator | 2025-09-23 07:56:14 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:17.243500 | orchestrator | 2025-09-23 07:56:17 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:17.244270 | orchestrator | 2025-09-23 07:56:17 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:17.245150 | orchestrator | 2025-09-23 07:56:17 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:17.245882 | orchestrator | 2025-09-23 07:56:17 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:17.246114 | orchestrator | 2025-09-23 07:56:17 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:20.295955 | orchestrator | 2025-09-23 07:56:20 | INFO  | Task 
d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:20.296033 | orchestrator | 2025-09-23 07:56:20 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:20.296774 | orchestrator | 2025-09-23 07:56:20 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:20.298396 | orchestrator | 2025-09-23 07:56:20 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:20.298538 | orchestrator | 2025-09-23 07:56:20 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:23.343428 | orchestrator | 2025-09-23 07:56:23 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:23.343516 | orchestrator | 2025-09-23 07:56:23 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:23.345097 | orchestrator | 2025-09-23 07:56:23 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:23.347080 | orchestrator | 2025-09-23 07:56:23 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:23.347115 | orchestrator | 2025-09-23 07:56:23 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:26.383258 | orchestrator | 2025-09-23 07:56:26 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:26.383879 | orchestrator | 2025-09-23 07:56:26 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:26.384490 | orchestrator | 2025-09-23 07:56:26 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:26.385215 | orchestrator | 2025-09-23 07:56:26 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:26.385354 | orchestrator | 2025-09-23 07:56:26 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:29.422254 | orchestrator | 2025-09-23 07:56:29 | INFO  | Task 
d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:29.422889 | orchestrator | 2025-09-23 07:56:29 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:29.423460 | orchestrator | 2025-09-23 07:56:29 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:29.424531 | orchestrator | 2025-09-23 07:56:29 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:29.424561 | orchestrator | 2025-09-23 07:56:29 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:32.457168 | orchestrator | 2025-09-23 07:56:32 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:32.457608 | orchestrator | 2025-09-23 07:56:32 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:32.458327 | orchestrator | 2025-09-23 07:56:32 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:32.458889 | orchestrator | 2025-09-23 07:56:32 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:32.458922 | orchestrator | 2025-09-23 07:56:32 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:35.480765 | orchestrator | 2025-09-23 07:56:35 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:35.481673 | orchestrator | 2025-09-23 07:56:35 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:35.482676 | orchestrator | 2025-09-23 07:56:35 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:35.483840 | orchestrator | 2025-09-23 07:56:35 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:35.483865 | orchestrator | 2025-09-23 07:56:35 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:38.518138 | orchestrator | 2025-09-23 07:56:38 | INFO  | Task 
d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:38.520466 | orchestrator | 2025-09-23 07:56:38 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:38.522594 | orchestrator | 2025-09-23 07:56:38 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:38.523886 | orchestrator | 2025-09-23 07:56:38 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:38.524136 | orchestrator | 2025-09-23 07:56:38 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:41.552589 | orchestrator | 2025-09-23 07:56:41 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:41.554453 | orchestrator | 2025-09-23 07:56:41 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:41.555828 | orchestrator | 2025-09-23 07:56:41 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:41.557176 | orchestrator | 2025-09-23 07:56:41 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:41.557211 | orchestrator | 2025-09-23 07:56:41 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:44.605044 | orchestrator | 2025-09-23 07:56:44 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED 2025-09-23 07:56:44.608759 | orchestrator | 2025-09-23 07:56:44 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED 2025-09-23 07:56:44.609903 | orchestrator | 2025-09-23 07:56:44 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED 2025-09-23 07:56:44.611001 | orchestrator | 2025-09-23 07:56:44 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:56:44.611059 | orchestrator | 2025-09-23 07:56:44 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:56:47.656680 | orchestrator | 2025-09-23 07:56:47 | INFO  | Task 
d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:56:47.658368 | orchestrator | 2025-09-23 07:56:47 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:56:47.661077 | orchestrator | 2025-09-23 07:56:47 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state STARTED
2025-09-23 07:56:47.663977 | orchestrator | 2025-09-23 07:56:47 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:56:47.664005 | orchestrator | 2025-09-23 07:56:47 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:56:50.717271 | orchestrator | 2025-09-23 07:56:50 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:56:50.719713 | orchestrator | 2025-09-23 07:56:50 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:56:50.723320 | orchestrator | 2025-09-23 07:56:50 | INFO  | Task 45aa64dd-afce-4f16-b48f-9630762a9ba1 is in state SUCCESS
2025-09-23 07:56:50.723537 | orchestrator |
2025-09-23 07:56:50.725648 | orchestrator |
2025-09-23 07:56:50.725680 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:56:50.725720 | orchestrator |
2025-09-23 07:56:50.725732 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:56:50.725765 | orchestrator | Tuesday 23 September 2025 07:54:07 +0000 (0:00:00.292) 0:00:00.292 *****
2025-09-23 07:56:50.725777 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:56:50.725789 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:56:50.725800 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:56:50.725811 | orchestrator |
2025-09-23 07:56:50.725822 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:56:50.725833 | orchestrator | Tuesday 23 September 2025 07:54:08 +0000 (0:00:00.388) 0:00:00.680 *****
2025-09-23 07:56:50.725843 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-23 07:56:50.725854 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-23 07:56:50.725865 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-23 07:56:50.725876 | orchestrator |
2025-09-23 07:56:50.725886 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-23 07:56:50.725897 | orchestrator |
2025-09-23 07:56:50.725908 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-23 07:56:50.725918 | orchestrator | Tuesday 23 September 2025 07:54:08 +0000 (0:00:00.533) 0:00:01.214 *****
2025-09-23 07:56:50.725929 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:56:50.725940 | orchestrator |
2025-09-23 07:56:50.725951 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-23 07:56:50.725961 | orchestrator | Tuesday 23 September 2025 07:54:09 +0000 (0:00:00.609) 0:00:01.824 *****
2025-09-23 07:56:50.725972 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-09-23 07:56:50.725983 | orchestrator |
2025-09-23 07:56:50.725993 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-23 07:56:50.726004 | orchestrator | Tuesday 23 September 2025 07:54:12 +0000 (0:00:03.361) 0:00:05.185 *****
2025-09-23 07:56:50.726134 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-23 07:56:50.726151 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-23 07:56:50.726162 | orchestrator |
2025-09-23 07:56:50.726174 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-23 07:56:50.726185 |
orchestrator | Tuesday 23 September 2025 07:54:19 +0000 (0:00:06.788) 0:00:11.974 *****
2025-09-23 07:56:50.726196 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-23 07:56:50.726207 | orchestrator |
2025-09-23 07:56:50.726218 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-23 07:56:50.726229 | orchestrator | Tuesday 23 September 2025 07:54:22 +0000 (0:00:03.601) 0:00:15.575 *****
2025-09-23 07:56:50.726240 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-23 07:56:50.726253 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-23 07:56:50.726266 | orchestrator |
2025-09-23 07:56:50.726278 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-23 07:56:50.726290 | orchestrator | Tuesday 23 September 2025 07:54:27 +0000 (0:00:04.191) 0:00:19.767 *****
2025-09-23 07:56:50.726302 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-23 07:56:50.726315 | orchestrator |
2025-09-23 07:56:50.726327 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-23 07:56:50.726339 | orchestrator | Tuesday 23 September 2025 07:54:30 +0000 (0:00:03.588) 0:00:23.356 *****
2025-09-23 07:56:50.726351 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-09-23 07:56:50.726377 | orchestrator |
2025-09-23 07:56:50.726390 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-09-23 07:56:50.726402 | orchestrator | Tuesday 23 September 2025 07:54:35 +0000 (0:00:04.324) 0:00:27.680 *****
2025-09-23 07:56:50.726443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '',
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.726472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.726487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.726507 | orchestrator | 2025-09-23 07:56:50.726519 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-23 07:56:50.726532 | orchestrator | Tuesday 23 September 2025 07:54:38 +0000 (0:00:03.135) 0:00:30.816 ***** 2025-09-23 07:56:50.726549 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:56:50.726562 | orchestrator | 2025-09-23 07:56:50.726580 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-23 07:56:50.726594 | orchestrator | Tuesday 23 September 2025 07:54:38 +0000 (0:00:00.579) 0:00:31.396 ***** 2025-09-23 07:56:50.726605 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:56:50.726616 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:56:50.726627 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:56:50.726638 | orchestrator | 2025-09-23 07:56:50.726648 | orchestrator | TASK 
[glance : Copy over multiple ceph configs for Glance] *********************
2025-09-23 07:56:50.726659 | orchestrator | Tuesday 23 September 2025 07:54:42 +0000 (0:00:03.326) 0:00:34.722 *****
2025-09-23 07:56:50.726670 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-23 07:56:50.726681 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-23 07:56:50.726714 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-23 07:56:50.726725 | orchestrator |
2025-09-23 07:56:50.726735 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-23 07:56:50.726746 | orchestrator | Tuesday 23 September 2025 07:54:43 +0000 (0:00:01.476) 0:00:36.199 *****
2025-09-23 07:56:50.726757 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-23 07:56:50.726767 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-23 07:56:50.726778 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-23 07:56:50.726789 | orchestrator |
2025-09-23 07:56:50.726800 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-23 07:56:50.726811 | orchestrator | Tuesday 23 September 2025 07:54:44 +0000 (0:00:01.134) 0:00:37.334 *****
2025-09-23 07:56:50.726821 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:56:50.726832 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:56:50.726843 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:56:50.726853 | orchestrator |
2025-09-23 07:56:50.726864 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-23 07:56:50.726875 | orchestrator | Tuesday 23 September 2025 07:54:45 +0000 (0:00:00.643) 0:00:37.977 *****
2025-09-23 07:56:50.726885 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:56:50.726896 | orchestrator |
2025-09-23 07:56:50.726907 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-23 07:56:50.726918 | orchestrator | Tuesday 23 September 2025 07:54:45 +0000 (0:00:00.227) 0:00:38.205 *****
2025-09-23 07:56:50.726935 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:56:50.726946 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:56:50.726957 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:56:50.726968 | orchestrator |
2025-09-23 07:56:50.726978 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-23 07:56:50.726989 | orchestrator | Tuesday 23 September 2025 07:54:45 +0000 (0:00:00.270) 0:00:38.476 *****
2025-09-23 07:56:50.727000 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:56:50.727011 | orchestrator |
2025-09-23 07:56:50.727021 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-09-23 07:56:50.727032 | orchestrator | Tuesday 23 September 2025 07:54:46 +0000 (0:00:00.518) 0:00:38.995 *****
2025-09-23 07:56:50.727055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/',
'', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.727068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.727087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.727099 | orchestrator | 2025-09-23 07:56:50.727110 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-23 07:56:50.727121 | orchestrator | Tuesday 23 September 2025 07:54:49 +0000 (0:00:03.509) 0:00:42.504 ***** 2025-09-23 07:56:50.727145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:56:50.727158 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.727171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:56:50.727188 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.727212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:56:50.727225 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.727236 | orchestrator | 2025-09-23 07:56:50.727247 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-23 07:56:50.727258 | orchestrator | Tuesday 23 September 2025 07:54:54 +0000 (0:00:04.187) 0:00:46.691 ***** 2025-09-23 07:56:50.727270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:56:50.727287 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.727310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:56:50.727323 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.727334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-23 07:56:50.727357 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.727368 | orchestrator | 2025-09-23 07:56:50.727379 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-23 07:56:50.727390 | orchestrator | Tuesday 23 September 2025 07:54:58 +0000 (0:00:04.043) 0:00:50.735 ***** 2025-09-23 07:56:50.727401 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.727412 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.727423 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.727434 | orchestrator | 2025-09-23 07:56:50.727445 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-23 07:56:50.727456 | orchestrator | Tuesday 23 September 2025 07:55:01 +0000 (0:00:03.625) 0:00:54.361 ***** 2025-09-23 07:56:50.727477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.727491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.727510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.727523 | orchestrator | 2025-09-23 07:56:50.727534 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-23 07:56:50.727544 | orchestrator | Tuesday 23 September 2025 07:55:05 +0000 (0:00:03.808) 0:00:58.169 ***** 2025-09-23 07:56:50.727555 | orchestrator | changed: [testbed-node-1] 2025-09-23 07:56:50.727566 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:56:50.727577 | orchestrator | changed: [testbed-node-2] 2025-09-23 07:56:50.727588 | orchestrator | 2025-09-23 07:56:50.727599 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-23 07:56:50.727610 | orchestrator | Tuesday 23 September 2025 07:55:10 +0000 (0:00:05.355) 0:01:03.525 ***** 2025-09-23 07:56:50.727621 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.727632 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.727647 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.727659 | orchestrator | 2025-09-23 07:56:50.727670 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-23 07:56:50.727840 | orchestrator | Tuesday 23 September 2025 07:55:15 +0000 (0:00:04.548) 0:01:08.073 ***** 2025-09-23 07:56:50.727856 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.727867 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.727886 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.727897 | orchestrator | 2025-09-23 07:56:50.727908 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-23 07:56:50.727919 | orchestrator | Tuesday 23 September 2025 07:55:19 +0000 (0:00:03.763) 0:01:11.837 ***** 2025-09-23 07:56:50.727929 | 
orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.727940 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.727951 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.727962 | orchestrator | 2025-09-23 07:56:50.727973 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-23 07:56:50.727984 | orchestrator | Tuesday 23 September 2025 07:55:22 +0000 (0:00:03.631) 0:01:15.469 ***** 2025-09-23 07:56:50.727994 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.728005 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.728015 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.728026 | orchestrator | 2025-09-23 07:56:50.728037 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-23 07:56:50.728048 | orchestrator | Tuesday 23 September 2025 07:55:28 +0000 (0:00:05.209) 0:01:20.678 ***** 2025-09-23 07:56:50.728058 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.728069 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.728080 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.728090 | orchestrator | 2025-09-23 07:56:50.728101 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-23 07:56:50.728112 | orchestrator | Tuesday 23 September 2025 07:55:28 +0000 (0:00:00.297) 0:01:20.976 ***** 2025-09-23 07:56:50.728122 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-23 07:56:50.728133 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:56:50.728144 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-23 07:56:50.728154 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.728165 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-23 07:56:50.728176 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.728186 | orchestrator | 2025-09-23 07:56:50.728197 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-23 07:56:50.728207 | orchestrator | Tuesday 23 September 2025 07:55:32 +0000 (0:00:04.365) 0:01:25.341 ***** 2025-09-23 07:56:50.728219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.728252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.728266 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-23 07:56:50.728278 | orchestrator | 2025-09-23 07:56:50.728289 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-23 07:56:50.728300 | orchestrator | Tuesday 23 September 2025 07:55:37 +0000 (0:00:04.718) 0:01:30.060 ***** 2025-09-23 07:56:50.728311 | orchestrator | skipping: 
[testbed-node-0] 2025-09-23 07:56:50.728321 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:56:50.728332 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:56:50.728349 | orchestrator | 2025-09-23 07:56:50.728360 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-23 07:56:50.728371 | orchestrator | Tuesday 23 September 2025 07:55:37 +0000 (0:00:00.498) 0:01:30.558 ***** 2025-09-23 07:56:50.728381 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:56:50.728392 | orchestrator | 2025-09-23 07:56:50.728403 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-23 07:56:50.728414 | orchestrator | Tuesday 23 September 2025 07:55:40 +0000 (0:00:02.436) 0:01:32.995 ***** 2025-09-23 07:56:50.728424 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:56:50.728435 | orchestrator | 2025-09-23 07:56:50.728446 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-23 07:56:50.728457 | orchestrator | Tuesday 23 September 2025 07:55:42 +0000 (0:00:02.313) 0:01:35.309 ***** 2025-09-23 07:56:50.728469 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:56:50.728482 | orchestrator | 2025-09-23 07:56:50.728494 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-23 07:56:50.728507 | orchestrator | Tuesday 23 September 2025 07:55:44 +0000 (0:00:01.907) 0:01:37.216 ***** 2025-09-23 07:56:50.728519 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:56:50.728547 | orchestrator | 2025-09-23 07:56:50.728560 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-23 07:56:50.728576 | orchestrator | Tuesday 23 September 2025 07:56:15 +0000 (0:00:30.560) 0:02:07.777 ***** 2025-09-23 07:56:50.728598 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:56:50.728610 | orchestrator | 2025-09-23 
07:56:50.728626 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-23 07:56:50.728638 | orchestrator | Tuesday 23 September 2025 07:56:17 +0000 (0:00:02.218) 0:02:09.996 *****
2025-09-23 07:56:50.728649 | orchestrator |
2025-09-23 07:56:50.728659 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-23 07:56:50.728670 | orchestrator | Tuesday 23 September 2025 07:56:17 +0000 (0:00:00.057) 0:02:10.053 *****
2025-09-23 07:56:50.728681 | orchestrator |
2025-09-23 07:56:50.728880 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-23 07:56:50.728892 | orchestrator | Tuesday 23 September 2025 07:56:17 +0000 (0:00:00.061) 0:02:10.114 *****
2025-09-23 07:56:50.728903 | orchestrator |
2025-09-23 07:56:50.728914 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-23 07:56:50.728925 | orchestrator | Tuesday 23 September 2025 07:56:17 +0000 (0:00:00.064) 0:02:10.178 *****
2025-09-23 07:56:50.728936 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:56:50.728947 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:56:50.728957 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:56:50.728968 | orchestrator |
2025-09-23 07:56:50.728979 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:56:50.728991 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-23 07:56:50.729002 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-23 07:56:50.729013 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-23 07:56:50.729024 | orchestrator |
2025-09-23 07:56:50.729035 | orchestrator |
2025-09-23 07:56:50.729046 |
orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:56:50.729056 | orchestrator | Tuesday 23 September 2025 07:56:50 +0000 (0:00:32.629) 0:02:42.808 *****
2025-09-23 07:56:50.729067 | orchestrator | ===============================================================================
2025-09-23 07:56:50.729078 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.63s
2025-09-23 07:56:50.729089 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.56s
2025-09-23 07:56:50.729109 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.79s
2025-09-23 07:56:50.729120 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.36s
2025-09-23 07:56:50.729131 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.21s
2025-09-23 07:56:50.729142 | orchestrator | glance : Check glance containers ---------------------------------------- 4.72s
2025-09-23 07:56:50.729153 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.55s
2025-09-23 07:56:50.729163 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.37s
2025-09-23 07:56:50.729174 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.32s
2025-09-23 07:56:50.729185 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.19s
2025-09-23 07:56:50.729196 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.19s
2025-09-23 07:56:50.729207 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.04s
2025-09-23 07:56:50.729217 | orchestrator | glance : Copying over config.json files for services -------------------- 3.81s
2025-09-23 07:56:50.729228 | orchestrator |
glance : Copying over glance-swift.conf for glance_api ------------------ 3.76s
2025-09-23 07:56:50.729239 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.63s
2025-09-23 07:56:50.729249 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.63s
2025-09-23 07:56:50.729260 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.60s
2025-09-23 07:56:50.729271 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.59s
2025-09-23 07:56:50.729282 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.51s
2025-09-23 07:56:50.729292 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.36s
2025-09-23 07:56:50.729303 | orchestrator | 2025-09-23 07:56:50 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:56:50.729314 | orchestrator | 2025-09-23 07:56:50 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:56:53.778593 | orchestrator | 2025-09-23 07:56:53 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:56:53.779023 | orchestrator | 2025-09-23 07:56:53 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:56:53.779672 | orchestrator | 2025-09-23 07:56:53 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:56:53.782237 | orchestrator | 2025-09-23 07:56:53 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:56:53.782261 | orchestrator | 2025-09-23 07:56:53 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:56:56.815943 | orchestrator | 2025-09-23 07:56:56 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:56:56.816549 | orchestrator | 2025-09-23 07:56:56 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:56:56.817839 | orchestrator | 2025-09-23 07:56:56 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:56:56.822415 | orchestrator | 2025-09-23 07:56:56 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:56:56.822459 | orchestrator | 2025-09-23 07:56:56 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:56:59.869340 | orchestrator | 2025-09-23 07:56:59 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:56:59.871602 | orchestrator | 2025-09-23 07:56:59 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:56:59.872756 | orchestrator | 2025-09-23 07:56:59 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:56:59.874178 | orchestrator | 2025-09-23 07:56:59 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:56:59.874206 | orchestrator | 2025-09-23 07:56:59 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:02.917065 | orchestrator | 2025-09-23 07:57:02 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:02.917905 | orchestrator | 2025-09-23 07:57:02 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:02.918852 | orchestrator | 2025-09-23 07:57:02 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:02.921314 | orchestrator | 2025-09-23 07:57:02 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:02.921348 | orchestrator | 2025-09-23 07:57:02 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:05.965540 | orchestrator | 2025-09-23 07:57:05 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:05.966469 | orchestrator | 2025-09-23 07:57:05 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:05.967474 |
orchestrator | 2025-09-23 07:57:05 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:05.969059 | orchestrator | 2025-09-23 07:57:05 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:05.969724 | orchestrator | 2025-09-23 07:57:05 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:09.039627 | orchestrator | 2025-09-23 07:57:09 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:09.040919 | orchestrator | 2025-09-23 07:57:09 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:09.042504 | orchestrator | 2025-09-23 07:57:09 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:09.044778 | orchestrator | 2025-09-23 07:57:09 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:09.044833 | orchestrator | 2025-09-23 07:57:09 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:12.098974 | orchestrator | 2025-09-23 07:57:12 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:12.099641 | orchestrator | 2025-09-23 07:57:12 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:12.099937 | orchestrator | 2025-09-23 07:57:12 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:12.104835 | orchestrator | 2025-09-23 07:57:12 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:12.104900 | orchestrator | 2025-09-23 07:57:12 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:15.147469 | orchestrator | 2025-09-23 07:57:15 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:15.149434 | orchestrator | 2025-09-23 07:57:15 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:15.151718 | orchestrator | 2025-09-23
07:57:15 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:15.152409 | orchestrator | 2025-09-23 07:57:15 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:15.152442 | orchestrator | 2025-09-23 07:57:15 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:18.196001 | orchestrator | 2025-09-23 07:57:18 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:18.196113 | orchestrator | 2025-09-23 07:57:18 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:18.196798 | orchestrator | 2025-09-23 07:57:18 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:18.198112 | orchestrator | 2025-09-23 07:57:18 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:18.198197 | orchestrator | 2025-09-23 07:57:18 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:21.238640 | orchestrator | 2025-09-23 07:57:21 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:21.238789 | orchestrator | 2025-09-23 07:57:21 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:21.239631 | orchestrator | 2025-09-23 07:57:21 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:21.240638 | orchestrator | 2025-09-23 07:57:21 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:21.240701 | orchestrator | 2025-09-23 07:57:21 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:24.291979 | orchestrator | 2025-09-23 07:57:24 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:24.292861 | orchestrator | 2025-09-23 07:57:24 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:24.294305 | orchestrator | 2025-09-23 07:57:24 | INFO  | Task
775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:24.295966 | orchestrator | 2025-09-23 07:57:24 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:24.296002 | orchestrator | 2025-09-23 07:57:24 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:27.340120 | orchestrator | 2025-09-23 07:57:27 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:27.342242 | orchestrator | 2025-09-23 07:57:27 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:27.343274 | orchestrator | 2025-09-23 07:57:27 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state STARTED
2025-09-23 07:57:27.344542 | orchestrator | 2025-09-23 07:57:27 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:27.344571 | orchestrator | 2025-09-23 07:57:27 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:30.395564 | orchestrator | 2025-09-23 07:57:30 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:30.398342 | orchestrator | 2025-09-23 07:57:30 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:30.400345 | orchestrator | 2025-09-23 07:57:30 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED
2025-09-23 07:57:30.404437 | orchestrator | 2025-09-23 07:57:30 | INFO  | Task 775c7f45-35d3-408f-b5b2-2e1cb458fcbb is in state SUCCESS
2025-09-23 07:57:30.406708 | orchestrator |
2025-09-23 07:57:30.406747 | orchestrator |
2025-09-23 07:57:30.406760 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:57:30.406772 | orchestrator |
2025-09-23 07:57:30.406784 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:57:30.406795 | orchestrator | Tuesday 23 September 2025 07:54:19 +0000 (0:00:00.262) 0:00:00.262
*****
2025-09-23 07:57:30.406806 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:57:30.406819 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:57:30.406830 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:57:30.406840 | orchestrator | ok: [testbed-node-3]
2025-09-23 07:57:30.406851 | orchestrator | ok: [testbed-node-4]
2025-09-23 07:57:30.406862 | orchestrator | ok: [testbed-node-5]
2025-09-23 07:57:30.406900 | orchestrator |
2025-09-23 07:57:30.406971 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:57:30.406986 | orchestrator | Tuesday 23 September 2025 07:54:19 +0000 (0:00:00.670) 0:00:00.932 *****
2025-09-23 07:57:30.406997 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-23 07:57:30.407008 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-23 07:57:30.407020 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-23 07:57:30.407031 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-23 07:57:30.407041 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-23 07:57:30.407052 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-23 07:57:30.407063 | orchestrator |
2025-09-23 07:57:30.407073 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-23 07:57:30.407084 | orchestrator |
2025-09-23 07:57:30.407095 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-23 07:57:30.407106 | orchestrator | Tuesday 23 September 2025 07:54:20 +0000 (0:00:00.579) 0:00:01.512 *****
2025-09-23 07:57:30.407117 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:57:30.407130 | orchestrator |
2025-09-23 07:57:30.407156 | orchestrator | TASK
[service-ks-register : cinder | Creating services] ************************
2025-09-23 07:57:30.407167 | orchestrator | Tuesday 23 September 2025 07:54:22 +0000 (0:00:01.586) 0:00:03.099 *****
2025-09-23 07:57:30.407179 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-23 07:57:30.407189 | orchestrator |
2025-09-23 07:57:30.407200 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-23 07:57:30.407211 | orchestrator | Tuesday 23 September 2025 07:54:25 +0000 (0:00:03.625) 0:00:06.725 *****
2025-09-23 07:57:30.407222 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-23 07:57:30.407370 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-23 07:57:30.407385 | orchestrator |
2025-09-23 07:57:30.407398 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-23 07:57:30.407410 | orchestrator | Tuesday 23 September 2025 07:54:32 +0000 (0:00:06.901) 0:00:13.626 *****
2025-09-23 07:57:30.407423 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-23 07:57:30.407435 | orchestrator |
2025-09-23 07:57:30.407447 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-23 07:57:30.407459 | orchestrator | Tuesday 23 September 2025 07:54:36 +0000 (0:00:03.533) 0:00:17.160 *****
2025-09-23 07:57:30.407471 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-23 07:57:30.407483 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-09-23 07:57:30.407495 | orchestrator |
2025-09-23 07:57:30.407508 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-09-23 07:57:30.407521 | orchestrator | Tuesday 23 September 2025 07:54:40
+0000 (0:00:04.094) 0:00:21.255 ***** 2025-09-23 07:57:30.407534 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-23 07:57:30.407546 | orchestrator | 2025-09-23 07:57:30.407559 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-23 07:57:30.407571 | orchestrator | Tuesday 23 September 2025 07:54:43 +0000 (0:00:03.052) 0:00:24.307 ***** 2025-09-23 07:57:30.407583 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-23 07:57:30.407596 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-23 07:57:30.407623 | orchestrator | 2025-09-23 07:57:30.407653 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-23 07:57:30.407665 | orchestrator | Tuesday 23 September 2025 07:54:51 +0000 (0:00:08.245) 0:00:32.553 ***** 2025-09-23 07:57:30.407679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.407725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.407745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.407757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.407769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.407822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.407844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.407856 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.407874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.407886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.407897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.407916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.407928 | orchestrator |
2025-09-23 07:57:30.407944 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-23 07:57:30.407955 | orchestrator | Tuesday 23 September 2025 07:54:54 +0000 (0:00:03.343) 0:00:35.896 *****
2025-09-23 07:57:30.407966 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:57:30.407977 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:57:30.407988 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:57:30.407998 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:57:30.408009 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:57:30.408019 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:57:30.408030 | orchestrator |
2025-09-23 07:57:30.408040 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-23 07:57:30.408051 | orchestrator | Tuesday 23 September 2025 07:54:55 +0000 (0:00:01.113) 0:00:37.010 *****
2025-09-23 07:57:30.408062 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:57:30.408072 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:57:30.408083 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:57:30.408094 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-23 07:57:30.408104 | orchestrator |
2025-09-23 07:57:30.408115 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-09-23
07:57:30.408125 | orchestrator | Tuesday 23 September 2025 07:54:57 +0000 (0:00:01.134) 0:00:38.144 ***** 2025-09-23 07:57:30.408136 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-23 07:57:30.408146 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-23 07:57:30.408157 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-23 07:57:30.408167 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-23 07:57:30.408178 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-23 07:57:30.408188 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-23 07:57:30.408199 | orchestrator | 2025-09-23 07:57:30.408210 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-23 07:57:30.408225 | orchestrator | Tuesday 23 September 2025 07:54:59 +0000 (0:00:02.102) 0:00:40.247 ***** 2025-09-23 07:57:30.408237 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-23 07:57:30.408257 | orchestrator | skipping: 
[testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-23 07:57:30.408281 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-23 07:57:30.408300 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-23 07:57:30.408326 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-23 07:57:30.408339 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-23 07:57:30.408357 | 
orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-23 07:57:30.408369 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-23 07:57:30.408387 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-23 07:57:30.408413 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-23 07:57:30.408425 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-23 07:57:30.408443 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-23 07:57:30.408454 | orchestrator | 2025-09-23 07:57:30.408465 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-23 07:57:30.408476 | orchestrator | Tuesday 23 September 2025 07:55:02 +0000 (0:00:03.569) 0:00:43.816 ***** 2025-09-23 07:57:30.408486 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-23 07:57:30.408498 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-23 07:57:30.408509 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-23 07:57:30.408519 | orchestrator | 2025-09-23 07:57:30.408530 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] 
***************** 2025-09-23 07:57:30.408541 | orchestrator | Tuesday 23 September 2025 07:55:04 +0000 (0:00:02.112) 0:00:45.929 ***** 2025-09-23 07:57:30.408551 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-23 07:57:30.408562 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-23 07:57:30.408573 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-23 07:57:30.408583 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-23 07:57:30.408594 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-23 07:57:30.408610 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-23 07:57:30.408621 | orchestrator | 2025-09-23 07:57:30.408632 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-23 07:57:30.408660 | orchestrator | Tuesday 23 September 2025 07:55:07 +0000 (0:00:03.003) 0:00:48.932 ***** 2025-09-23 07:57:30.408671 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-23 07:57:30.408682 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-23 07:57:30.408693 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-23 07:57:30.408704 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-23 07:57:30.408715 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-23 07:57:30.408726 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-23 07:57:30.408737 | orchestrator | 2025-09-23 07:57:30.408747 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-23 07:57:30.408772 | orchestrator | Tuesday 23 September 2025 07:55:08 +0000 (0:00:01.006) 0:00:49.939 ***** 2025-09-23 07:57:30.408783 | orchestrator | skipping: [testbed-node-0] 2025-09-23 
07:57:30.408794 | orchestrator | 2025-09-23 07:57:30.408816 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-23 07:57:30.408827 | orchestrator | Tuesday 23 September 2025 07:55:08 +0000 (0:00:00.113) 0:00:50.052 ***** 2025-09-23 07:57:30.408844 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:57:30.408858 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:57:30.408876 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:57:30.408892 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:57:30.408910 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:57:30.408929 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:57:30.408946 | orchestrator | 2025-09-23 07:57:30.408964 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-23 07:57:30.408979 | orchestrator | Tuesday 23 September 2025 07:55:09 +0000 (0:00:00.696) 0:00:50.748 ***** 2025-09-23 07:57:30.408997 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 07:57:30.409009 | orchestrator | 2025-09-23 07:57:30.409020 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-23 07:57:30.409031 | orchestrator | Tuesday 23 September 2025 07:55:10 +0000 (0:00:01.164) 0:00:51.913 ***** 2025-09-23 07:57:30.409042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.409055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.409074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.409086 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.409109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 
07:57:30.409121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.409132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.409144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.409715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.409749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.409769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.409781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.409791 | orchestrator | 2025-09-23 07:57:30.409802 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-23 07:57:30.409813 | orchestrator | Tuesday 23 September 2025 07:55:14 +0000 (0:00:03.338) 0:00:55.252 ***** 2025-09-23 07:57:30.409825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 07:57:30.409844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.409862 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:57:30.409873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 07:57:30.409890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.409902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.409913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.409924 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:57:30.409935 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:57:30.409946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 07:57:30.409970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.409981 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:57:30.409997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.410009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.410073 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:57:30.410088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.410101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.410124 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:57:30.410135 | orchestrator | 2025-09-23 07:57:30.410147 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-23 07:57:30.410158 | orchestrator | Tuesday 23 September 2025 07:55:16 +0000 (0:00:02.160) 0:00:57.412 ***** 2025-09-23 07:57:30.410179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 07:57:30.410196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 07:57:30.410208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.410220 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 07:57:30.410232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.410258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.410271 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:57:30.410284 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:57:30.410297 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:57:30.410311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.410333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.410347 | orchestrator |
skipping: [testbed-node-4] 2025-09-23 07:57:30.410360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.410373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.410393 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:57:30.410414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.410428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.410440 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:57:30.410453 | orchestrator |
2025-09-23 07:57:30.410466 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-09-23 07:57:30.410479 | orchestrator | Tuesday 23 September 2025 07:55:17 +0000 (0:00:01.640) 0:00:59.053 *****
2025-09-23 07:57:30.410497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes':
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.410511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.410533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.410554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.410706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.410717 | orchestrator |
2025-09-23 07:57:30.410728 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-09-23 07:57:30.410738 | orchestrator | Tuesday 23 September 2025 07:55:21 +0000 (0:00:03.058) 0:01:02.111 *****
2025-09-23 07:57:30.410749 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-23 07:57:30.410760 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:57:30.410771 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-23 07:57:30.410789 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:57:30.410800 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-23 07:57:30.410810 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:57:30.410821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-23 07:57:30.410832 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-23 07:57:30.410843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-23 07:57:30.410853 | orchestrator |
2025-09-23 07:57:30.410864 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-09-23 07:57:30.410874 | orchestrator | Tuesday 23 September 2025 07:55:23 +0000 (0:00:02.173) 0:01:04.285 *****
2025-09-23 07:57:30.410885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-23 07:57:30.410903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.410919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.410931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.410988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-23 07:57:30.411055 | orchestrator |
2025-09-23 07:57:30.411066 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-09-23 07:57:30.411077 | orchestrator | Tuesday 23 September 2025 07:55:32 +0000 (0:00:09.464) 0:01:13.750 *****
2025-09-23 07:57:30.411093 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:57:30.411104 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:57:30.411115 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:57:30.411125 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:57:30.411136 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:57:30.411146 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:57:30.411157 | orchestrator |
2025-09-23 07:57:30.411167 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-09-23 07:57:30.411178 | orchestrator | Tuesday 23 September 2025 07:55:34 +0000 (0:00:02.154) 0:01:15.905 *****
2025-09-23 07:57:30.411190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 07:57:30.411206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411224 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:57:30.411235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 
07:57:30.411246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411258 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:57:30.411274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411297 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:57:30.411313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-23 07:57:30.411335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411369 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:57:30.411380 | orchestrator | skipping: [testbed-node-2] 2025-09-23 07:57:30.411397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-23 07:57:30.411431 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:57:30.411442 | orchestrator | 2025-09-23 07:57:30.411453 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-23 07:57:30.411463 | orchestrator | Tuesday 23 September 2025 07:55:36 +0000 (0:00:01.505) 0:01:17.411 ***** 2025-09-23 07:57:30.411474 | orchestrator | skipping: [testbed-node-0] 2025-09-23 07:57:30.411484 | orchestrator | skipping: [testbed-node-1] 2025-09-23 07:57:30.411495 | orchestrator | skipping: [testbed-node-2] 2025-09-23 
07:57:30.411506 | orchestrator | skipping: [testbed-node-3] 2025-09-23 07:57:30.411516 | orchestrator | skipping: [testbed-node-4] 2025-09-23 07:57:30.411527 | orchestrator | skipping: [testbed-node-5] 2025-09-23 07:57:30.411537 | orchestrator | 2025-09-23 07:57:30.411548 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-23 07:57:30.411559 | orchestrator | Tuesday 23 September 2025 07:55:37 +0000 (0:00:00.682) 0:01:18.093 ***** 2025-09-23 07:57:30.411570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.411581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.411600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-23 07:57:30.411622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-23 07:57:30.411750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})
2025-09-23 07:57:30.411761 | orchestrator |
2025-09-23 07:57:30.411772 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-23 07:57:30.411783 | orchestrator | Tuesday 23 September 2025 07:55:39 +0000 (0:00:02.731) 0:01:20.824 *****
2025-09-23 07:57:30.411794 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:57:30.411805 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:57:30.411815 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:57:30.411826 | orchestrator | skipping: [testbed-node-3]
2025-09-23 07:57:30.411836 | orchestrator | skipping: [testbed-node-4]
2025-09-23 07:57:30.411847 | orchestrator | skipping: [testbed-node-5]
2025-09-23 07:57:30.411857 | orchestrator |
2025-09-23 07:57:30.411868 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-09-23 07:57:30.411879 | orchestrator | Tuesday 23 September 2025 07:55:40 +0000 (0:00:00.539) 0:01:21.364 *****
2025-09-23 07:57:30.411890 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:57:30.411900 | orchestrator |
2025-09-23 07:57:30.411911 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-09-23 07:57:30.411921 | orchestrator | Tuesday 23 September 2025 07:55:42 +0000 (0:00:02.588) 0:01:23.952 *****
2025-09-23 07:57:30.411933 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:57:30.411952 | orchestrator |
2025-09-23 07:57:30.411964 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-09-23 07:57:30.411982 | orchestrator | Tuesday 23 September 2025 07:55:44 +0000 (0:00:02.059) 0:01:26.012 *****
2025-09-23 07:57:30.411992 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:57:30.412003 | orchestrator |
2025-09-23 07:57:30.412013 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-23 07:57:30.412024 | orchestrator | Tuesday 23 September 2025 07:56:06 +0000 (0:00:22.056) 0:01:48.069 *****
2025-09-23 07:57:30.412034 | orchestrator |
2025-09-23 07:57:30.412051 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-23 07:57:30.412062 | orchestrator | Tuesday 23 September 2025 07:56:07 +0000 (0:00:00.062) 0:01:48.132 *****
2025-09-23 07:57:30.412073 | orchestrator |
2025-09-23 07:57:30.412083 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-23 07:57:30.412094 | orchestrator | Tuesday 23 September 2025 07:56:07 +0000 (0:00:00.062) 0:01:48.195 *****
2025-09-23 07:57:30.412104 | orchestrator |
2025-09-23 07:57:30.412115 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-23 07:57:30.412125 | orchestrator | Tuesday 23 September 2025 07:56:07 +0000 (0:00:00.063) 0:01:48.258 *****
2025-09-23 07:57:30.412136 | orchestrator |
2025-09-23 07:57:30.412146 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-23 07:57:30.412157 | orchestrator | Tuesday 23 September 2025 07:56:07 +0000 (0:00:00.060) 0:01:48.318 *****
2025-09-23 07:57:30.412168 | orchestrator |
2025-09-23 07:57:30.412178 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-23 07:57:30.412189 | orchestrator | Tuesday 23 September 2025 07:56:07 +0000 (0:00:00.060) 0:01:48.379 *****
2025-09-23 07:57:30.412199 | orchestrator |
2025-09-23 07:57:30.412210 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-09-23 07:57:30.412220 | orchestrator | Tuesday 23 September 2025 07:56:07 +0000 (0:00:00.061) 0:01:48.441 *****
2025-09-23 07:57:30.412231 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:57:30.412242 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:57:30.412252 | orchestrator | changed:
[testbed-node-2]
2025-09-23 07:57:30.412263 | orchestrator |
2025-09-23 07:57:30.412274 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-09-23 07:57:30.412284 | orchestrator | Tuesday 23 September 2025 07:56:28 +0000 (0:00:21.581) 0:02:10.022 *****
2025-09-23 07:57:30.412295 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:57:30.412306 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:57:30.412316 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:57:30.412327 | orchestrator |
2025-09-23 07:57:30.412342 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-09-23 07:57:30.412354 | orchestrator | Tuesday 23 September 2025 07:56:35 +0000 (0:00:06.451) 0:02:16.473 *****
2025-09-23 07:57:30.412364 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:57:30.412375 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:57:30.412385 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:57:30.412396 | orchestrator |
2025-09-23 07:57:30.412407 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-09-23 07:57:30.412417 | orchestrator | Tuesday 23 September 2025 07:57:17 +0000 (0:00:41.763) 0:02:58.237 *****
2025-09-23 07:57:30.412428 | orchestrator | changed: [testbed-node-3]
2025-09-23 07:57:30.412439 | orchestrator | changed: [testbed-node-4]
2025-09-23 07:57:30.412449 | orchestrator | changed: [testbed-node-5]
2025-09-23 07:57:30.412460 | orchestrator |
2025-09-23 07:57:30.412471 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-09-23 07:57:30.412481 | orchestrator | Tuesday 23 September 2025 07:57:27 +0000 (0:00:10.684) 0:03:08.921 *****
2025-09-23 07:57:30.412492 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:57:30.412503 | orchestrator |
2025-09-23 07:57:30.412513 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:57:30.412524 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-23 07:57:30.412542 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-23 07:57:30.412553 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-23 07:57:30.412564 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-23 07:57:30.412575 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-23 07:57:30.412585 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-23 07:57:30.412596 | orchestrator |
2025-09-23 07:57:30.412607 | orchestrator |
2025-09-23 07:57:30.412617 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:57:30.412628 | orchestrator | Tuesday 23 September 2025 07:57:28 +0000 (0:00:00.635) 0:03:09.557 *****
2025-09-23 07:57:30.412698 | orchestrator | ===============================================================================
2025-09-23 07:57:30.412712 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 41.76s
2025-09-23 07:57:30.412723 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.06s
2025-09-23 07:57:30.412734 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.58s
2025-09-23 07:57:30.412745 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.68s
2025-09-23 07:57:30.412756 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.47s
2025-09-23 07:57:30.412768 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.25s
2025-09-23 07:57:30.412779 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.90s
2025-09-23 07:57:30.412790 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.45s
2025-09-23 07:57:30.412852 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.09s
2025-09-23 07:57:30.412865 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.63s
2025-09-23 07:57:30.412876 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.57s
2025-09-23 07:57:30.412887 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.53s
2025-09-23 07:57:30.412898 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.34s
2025-09-23 07:57:30.412909 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.34s
2025-09-23 07:57:30.412920 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.06s
2025-09-23 07:57:30.412930 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.05s
2025-09-23 07:57:30.412941 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.00s
2025-09-23 07:57:30.412952 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.73s
2025-09-23 07:57:30.412963 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.59s
2025-09-23 07:57:30.412973 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.17s
2025-09-23 07:57:30.412984 | orchestrator | 2025-09-23 07:57:30 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:30.412995 | orchestrator |
2025-09-23 07:57:30 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:33.457239 | orchestrator | 2025-09-23 07:57:33 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state STARTED
2025-09-23 07:57:33.457855 | orchestrator | 2025-09-23 07:57:33 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED
2025-09-23 07:57:33.458805 | orchestrator | 2025-09-23 07:57:33 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED
2025-09-23 07:57:33.459824 | orchestrator | 2025-09-23 07:57:33 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:57:33.459860 | orchestrator | 2025-09-23 07:57:33 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:57:39.551391 | orchestrator | 2025-09-23 07:57:39 | INFO  | Task d86ee845-e797-4856-9c1f-77972a9559be is in state SUCCESS
2025-09-23 07:57:42.616613 | orchestrator | 2025-09-23 07:57:42 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state STARTED
2025-09-23 07:58:16.143395 | orchestrator | 2025-09-23 07:58:16 | INFO  | Wait 1 second(s) until the next check
2025-09-23 07:58:19.183838 | orchestrator | 2025-09-23 07:58:19 | INFO  |
Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:19.186007 | orchestrator | 2025-09-23 07:58:19 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED 2025-09-23 07:58:19.187699 | orchestrator | 2025-09-23 07:58:19 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state STARTED 2025-09-23 07:58:19.189449 | orchestrator | 2025-09-23 07:58:19 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:19.189489 | orchestrator | 2025-09-23 07:58:19 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:22.237552 | orchestrator | 2025-09-23 07:58:22 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:22.239077 | orchestrator | 2025-09-23 07:58:22 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED 2025-09-23 07:58:22.241549 | orchestrator | 2025-09-23 07:58:22 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state STARTED 2025-09-23 07:58:22.244393 | orchestrator | 2025-09-23 07:58:22 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:22.244535 | orchestrator | 2025-09-23 07:58:22 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:25.291692 | orchestrator | 2025-09-23 07:58:25 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:25.293860 | orchestrator | 2025-09-23 07:58:25 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED 2025-09-23 07:58:25.296702 | orchestrator | 2025-09-23 07:58:25 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state STARTED 2025-09-23 07:58:25.299830 | orchestrator | 2025-09-23 07:58:25 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:25.299863 | orchestrator | 2025-09-23 07:58:25 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:28.344139 | orchestrator | 2025-09-23 07:58:28 | INFO  | Task 
a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:28.348714 | orchestrator | 2025-09-23 07:58:28 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED 2025-09-23 07:58:28.351316 | orchestrator | 2025-09-23 07:58:28 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state STARTED 2025-09-23 07:58:28.353102 | orchestrator | 2025-09-23 07:58:28 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:28.353361 | orchestrator | 2025-09-23 07:58:28 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:31.397676 | orchestrator | 2025-09-23 07:58:31 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:31.398820 | orchestrator | 2025-09-23 07:58:31 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED 2025-09-23 07:58:31.400326 | orchestrator | 2025-09-23 07:58:31 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state STARTED 2025-09-23 07:58:31.401757 | orchestrator | 2025-09-23 07:58:31 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:31.401786 | orchestrator | 2025-09-23 07:58:31 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:34.443389 | orchestrator | 2025-09-23 07:58:34 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:34.445011 | orchestrator | 2025-09-23 07:58:34 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED 2025-09-23 07:58:34.449483 | orchestrator | 2025-09-23 07:58:34 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state STARTED 2025-09-23 07:58:34.451088 | orchestrator | 2025-09-23 07:58:34 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:34.451127 | orchestrator | 2025-09-23 07:58:34 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:37.486835 | orchestrator | 2025-09-23 07:58:37 | INFO  | Task 
a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:37.488016 | orchestrator | 2025-09-23 07:58:37 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED 2025-09-23 07:58:37.489396 | orchestrator | 2025-09-23 07:58:37 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state STARTED 2025-09-23 07:58:37.490618 | orchestrator | 2025-09-23 07:58:37 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:37.491179 | orchestrator | 2025-09-23 07:58:37 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:40.530142 | orchestrator | 2025-09-23 07:58:40 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:40.530587 | orchestrator | 2025-09-23 07:58:40 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state STARTED 2025-09-23 07:58:40.531993 | orchestrator | 2025-09-23 07:58:40.532033 | orchestrator | 2025-09-23 07:58:40.532055 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-23 07:58:40.532075 | orchestrator | 2025-09-23 07:58:40.532094 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-23 07:58:40.532107 | orchestrator | Tuesday 23 September 2025 07:51:28 +0000 (0:00:00.173) 0:00:00.173 ***** 2025-09-23 07:58:40.532143 | orchestrator | changed: [localhost] 2025-09-23 07:58:40.532155 | orchestrator | 2025-09-23 07:58:40.532166 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-23 07:58:40.532176 | orchestrator | Tuesday 23 September 2025 07:51:29 +0000 (0:00:00.918) 0:00:01.091 ***** 2025-09-23 07:58:40.532187 | orchestrator | 2025-09-23 07:58:40.532198 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-23 07:58:40.532208 | orchestrator | 2025-09-23 07:58:40.532219 | orchestrator | STILL ALIVE [task 'Download ironic-agent 
initramfs' is running] **************** 2025-09-23 07:58:40.532230 | orchestrator | 2025-09-23 07:58:40.532240 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-23 07:58:40.532251 | orchestrator | 2025-09-23 07:58:40.532261 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-23 07:58:40.532288 | orchestrator | 2025-09-23 07:58:40.532299 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-23 07:58:40.532310 | orchestrator | 2025-09-23 07:58:40.532320 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-23 07:58:40.532331 | orchestrator | 2025-09-23 07:58:40.532349 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-23 07:58:40.532370 | orchestrator | changed: [localhost] 2025-09-23 07:58:40.532389 | orchestrator | 2025-09-23 07:58:40.532411 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-23 07:58:40.532432 | orchestrator | Tuesday 23 September 2025 07:57:12 +0000 (0:05:42.350) 0:05:43.442 ***** 2025-09-23 07:58:40.532452 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 
2025-09-23 07:58:40.532464 | orchestrator | changed: [localhost] 2025-09-23 07:58:40.532475 | orchestrator | 2025-09-23 07:58:40.532485 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:58:40.532496 | orchestrator | 2025-09-23 07:58:40.532506 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:58:40.532517 | orchestrator | Tuesday 23 September 2025 07:57:37 +0000 (0:00:25.673) 0:06:09.115 ***** 2025-09-23 07:58:40.532528 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:58:40.532569 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:58:40.532582 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:58:40.532595 | orchestrator | 2025-09-23 07:58:40.532608 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:58:40.532621 | orchestrator | Tuesday 23 September 2025 07:57:38 +0000 (0:00:00.330) 0:06:09.446 ***** 2025-09-23 07:58:40.532635 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-23 07:58:40.532655 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-23 07:58:40.532673 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-23 07:58:40.532694 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-23 07:58:40.532713 | orchestrator | 2025-09-23 07:58:40.532733 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-23 07:58:40.532747 | orchestrator | skipping: no hosts matched 2025-09-23 07:58:40.532761 | orchestrator | 2025-09-23 07:58:40.532773 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:58:40.532786 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:58:40.532800 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:58:40.532816 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:58:40.532829 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:58:40.532840 | orchestrator | 2025-09-23 07:58:40.532856 | orchestrator | 2025-09-23 07:58:40.532875 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:58:40.532904 | orchestrator | Tuesday 23 September 2025 07:57:38 +0000 (0:00:00.459) 0:06:09.905 ***** 2025-09-23 07:58:40.532922 | orchestrator | =============================================================================== 2025-09-23 07:58:40.532940 | orchestrator | Download ironic-agent initramfs --------------------------------------- 342.35s 2025-09-23 07:58:40.532958 | orchestrator | Download ironic-agent kernel ------------------------------------------- 25.67s 2025-09-23 07:58:40.532974 | orchestrator | Ensure the destination directory exists --------------------------------- 0.92s 2025-09-23 07:58:40.532993 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-09-23 07:58:40.533010 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-09-23 07:58:40.533029 | orchestrator | 2025-09-23 07:58:40.533049 | orchestrator | 2025-09-23 07:58:40.533067 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:58:40.533084 | orchestrator | 2025-09-23 07:58:40.533095 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:58:40.533107 | orchestrator | Tuesday 23 September 2025 07:57:43 +0000 (0:00:00.273) 0:00:00.273 ***** 2025-09-23 07:58:40.533118 | orchestrator | ok: [testbed-node-0] 2025-09-23 
07:58:40.533128 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:58:40.533139 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:58:40.533150 | orchestrator | 2025-09-23 07:58:40.533164 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:58:40.533183 | orchestrator | Tuesday 23 September 2025 07:57:43 +0000 (0:00:00.299) 0:00:00.573 ***** 2025-09-23 07:58:40.533217 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-23 07:58:40.533237 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-23 07:58:40.533255 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-23 07:58:40.533272 | orchestrator | 2025-09-23 07:58:40.533283 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-23 07:58:40.533294 | orchestrator | 2025-09-23 07:58:40.533305 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-23 07:58:40.533361 | orchestrator | Tuesday 23 September 2025 07:57:43 +0000 (0:00:00.425) 0:00:00.999 ***** 2025-09-23 07:58:40.533373 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:58:40.533384 | orchestrator | 2025-09-23 07:58:40.533394 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-23 07:58:40.533405 | orchestrator | Tuesday 23 September 2025 07:57:44 +0000 (0:00:00.622) 0:00:01.621 ***** 2025-09-23 07:58:40.533416 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-23 07:58:40.533427 | orchestrator | 2025-09-23 07:58:40.533437 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-23 07:58:40.533448 | orchestrator | Tuesday 23 September 2025 07:57:47 +0000 (0:00:03.436) 0:00:05.058 ***** 2025-09-23 07:58:40.533458 | 
orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-23 07:58:40.533473 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-23 07:58:40.533491 | orchestrator | 2025-09-23 07:58:40.533508 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-23 07:58:40.533528 | orchestrator | Tuesday 23 September 2025 07:57:54 +0000 (0:00:06.748) 0:00:11.806 ***** 2025-09-23 07:58:40.533573 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-23 07:58:40.533590 | orchestrator | 2025-09-23 07:58:40.533601 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-23 07:58:40.533611 | orchestrator | Tuesday 23 September 2025 07:57:58 +0000 (0:00:03.632) 0:00:15.438 ***** 2025-09-23 07:58:40.533622 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-23 07:58:40.533633 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-23 07:58:40.533654 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-23 07:58:40.533664 | orchestrator | 2025-09-23 07:58:40.533675 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-23 07:58:40.533686 | orchestrator | Tuesday 23 September 2025 07:58:06 +0000 (0:00:08.285) 0:00:23.724 ***** 2025-09-23 07:58:40.533696 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-23 07:58:40.533707 | orchestrator | 2025-09-23 07:58:40.533717 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-23 07:58:40.533728 | orchestrator | Tuesday 23 September 2025 07:58:09 +0000 (0:00:03.385) 0:00:27.110 ***** 2025-09-23 07:58:40.533739 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-23 07:58:40.533749 | 
orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-23 07:58:40.533760 | orchestrator | 2025-09-23 07:58:40.533771 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-23 07:58:40.533781 | orchestrator | Tuesday 23 September 2025 07:58:17 +0000 (0:00:07.511) 0:00:34.622 ***** 2025-09-23 07:58:40.533795 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-23 07:58:40.533813 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-23 07:58:40.533830 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-23 07:58:40.533850 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-23 07:58:40.533868 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-23 07:58:40.533886 | orchestrator | 2025-09-23 07:58:40.533898 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-23 07:58:40.533909 | orchestrator | Tuesday 23 September 2025 07:58:34 +0000 (0:00:16.783) 0:00:51.405 ***** 2025-09-23 07:58:40.533919 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 07:58:40.533930 | orchestrator | 2025-09-23 07:58:40.533941 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-23 07:58:40.533951 | orchestrator | Tuesday 23 September 2025 07:58:34 +0000 (0:00:00.527) 0:00:51.933 ***** 2025-09-23 07:58:40.533966 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "503 Service Unavailable\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-09-23 07:58:40.533980 | orchestrator | 2025-09-23 07:58:40.533991 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:58:40.534009 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-23 07:58:40.534074 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:58:40.534096 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:58:40.534108 | orchestrator | 2025-09-23 07:58:40.534180 | orchestrator | 2025-09-23 07:58:40.534204 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:58:40.534225 | orchestrator | Tuesday 23 September 2025 07:58:38 +0000 (0:00:03.944) 0:00:55.877 ***** 2025-09-23 07:58:40.534236 | orchestrator | =============================================================================== 2025-09-23 07:58:40.534247 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.78s 2025-09-23 07:58:40.534258 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.29s 2025-09-23 07:58:40.534281 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.51s 2025-09-23 07:58:40.534292 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.75s 2025-09-23 07:58:40.534303 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.94s 2025-09-23 07:58:40.534313 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 
3.63s 2025-09-23 07:58:40.534324 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.44s 2025-09-23 07:58:40.534334 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.39s 2025-09-23 07:58:40.534344 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.62s 2025-09-23 07:58:40.534355 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.53s 2025-09-23 07:58:40.534365 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-09-23 07:58:40.534376 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-23 07:58:40.534386 | orchestrator | 2025-09-23 07:58:40 | INFO  | Task 6687c471-7e3d-4029-a156-87c33e5aa0c7 is in state SUCCESS 2025-09-23 07:58:40.534484 | orchestrator | 2025-09-23 07:58:40 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:40.534499 | orchestrator | 2025-09-23 07:58:40 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:43.580343 | orchestrator | 2025-09-23 07:58:43 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:43.580835 | orchestrator | 2025-09-23 07:58:43 | INFO  | Task 7a257bc7-7a86-487f-b949-8d3e07abc091 is in state SUCCESS 2025-09-23 07:58:43.581974 | orchestrator | 2025-09-23 07:58:43 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:43.582009 | orchestrator | 2025-09-23 07:58:43 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:46.627401 | orchestrator | 2025-09-23 07:58:46 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:46.629200 | orchestrator | 2025-09-23 07:58:46 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:46.629233 | orchestrator | 2025-09-23 07:58:46 | INFO  | 
Wait 1 second(s) until the next check 2025-09-23 07:58:49.669117 | orchestrator | 2025-09-23 07:58:49 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:49.672176 | orchestrator | 2025-09-23 07:58:49 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:49.672221 | orchestrator | 2025-09-23 07:58:49 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:52.720902 | orchestrator | 2025-09-23 07:58:52 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:52.722145 | orchestrator | 2025-09-23 07:58:52 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:52.722277 | orchestrator | 2025-09-23 07:58:52 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:55.766618 | orchestrator | 2025-09-23 07:58:55 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:55.767329 | orchestrator | 2025-09-23 07:58:55 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:55.767359 | orchestrator | 2025-09-23 07:58:55 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:58:58.797624 | orchestrator | 2025-09-23 07:58:58 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:58:58.797833 | orchestrator | 2025-09-23 07:58:58 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:58:58.797905 | orchestrator | 2025-09-23 07:58:58 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:59:01.847952 | orchestrator | 2025-09-23 07:59:01 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:59:01.850116 | orchestrator | 2025-09-23 07:59:01 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:59:01.850144 | orchestrator | 2025-09-23 07:59:01 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:59:04.891154 | orchestrator 
| 2025-09-23 07:59:04 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:59:04.892851 | orchestrator | 2025-09-23 07:59:04 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:59:04.893634 | orchestrator | 2025-09-23 07:59:04 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:59:07.936477 | orchestrator | 2025-09-23 07:59:07 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state STARTED 2025-09-23 07:59:07.936731 | orchestrator | 2025-09-23 07:59:07 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED 2025-09-23 07:59:07.936754 | orchestrator | 2025-09-23 07:59:07 | INFO  | Wait 1 second(s) until the next check 2025-09-23 07:59:10.972126 | orchestrator | 2025-09-23 07:59:10 | INFO  | Task a8840c64-c6b6-4abc-924e-0fe70da08ac9 is in state SUCCESS 2025-09-23 07:59:10.973521 | orchestrator | 2025-09-23 07:59:10.973545 | orchestrator | 2025-09-23 07:59:10.973551 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 07:59:10.973555 | orchestrator | 2025-09-23 07:59:10.973560 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 07:59:10.973565 | orchestrator | Tuesday 23 September 2025 07:57:32 +0000 (0:00:00.184) 0:00:00.184 ***** 2025-09-23 07:59:10.973569 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:59:10.973574 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:59:10.973578 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:59:10.973582 | orchestrator | 2025-09-23 07:59:10.973586 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 07:59:10.973590 | orchestrator | Tuesday 23 September 2025 07:57:33 +0000 (0:00:00.323) 0:00:00.507 ***** 2025-09-23 07:59:10.973594 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-23 07:59:10.973598 | orchestrator | ok: [testbed-node-1] 
=> (item=enable_nova_True) 2025-09-23 07:59:10.973602 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-23 07:59:10.973606 | orchestrator | 2025-09-23 07:59:10.973610 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-23 07:59:10.973613 | orchestrator | 2025-09-23 07:59:10.973617 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-23 07:59:10.973621 | orchestrator | Tuesday 23 September 2025 07:57:33 +0000 (0:00:00.619) 0:00:01.127 ***** 2025-09-23 07:59:10.973626 | orchestrator | ok: [testbed-node-0] 2025-09-23 07:59:10.973629 | orchestrator | ok: [testbed-node-2] 2025-09-23 07:59:10.973633 | orchestrator | ok: [testbed-node-1] 2025-09-23 07:59:10.973637 | orchestrator | 2025-09-23 07:59:10.973641 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 07:59:10.973645 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:59:10.973651 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:59:10.973655 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 07:59:10.973658 | orchestrator | 2025-09-23 07:59:10.973662 | orchestrator | 2025-09-23 07:59:10.973666 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 07:59:10.973684 | orchestrator | Tuesday 23 September 2025 07:58:41 +0000 (0:01:07.795) 0:01:08.922 ***** 2025-09-23 07:59:10.973688 | orchestrator | =============================================================================== 2025-09-23 07:59:10.973692 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 67.80s 2025-09-23 07:59:10.973695 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.62s
2025-09-23 07:59:10.973699 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-09-23 07:59:10.973703 | orchestrator |
2025-09-23 07:59:10.973706 | orchestrator |
2025-09-23 07:59:10.973710 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 07:59:10.973714 | orchestrator |
2025-09-23 07:59:10.973717 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-23 07:59:10.973721 | orchestrator | Tuesday 23 September 2025 07:56:54 +0000 (0:00:00.251) 0:00:00.251 *****
2025-09-23 07:59:10.973725 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:59:10.973729 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:59:10.973732 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:59:10.973736 | orchestrator |
2025-09-23 07:59:10.973740 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-23 07:59:10.973744 | orchestrator | Tuesday 23 September 2025 07:56:54 +0000 (0:00:00.249) 0:00:00.501 *****
2025-09-23 07:59:10.973747 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-23 07:59:10.973751 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-09-23 07:59:10.973755 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-09-23 07:59:10.973758 | orchestrator |
2025-09-23 07:59:10.973762 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-09-23 07:59:10.973766 | orchestrator |
2025-09-23 07:59:10.973769 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-23 07:59:10.973773 | orchestrator | Tuesday 23 September 2025 07:56:54 +0000 (0:00:00.361) 0:00:00.863 *****
2025-09-23 07:59:10.973788 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:59:10.973792 | orchestrator |
2025-09-23 07:59:10.973796 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-09-23 07:59:10.973800 | orchestrator | Tuesday 23 September 2025 07:56:55 +0000 (0:00:00.471) 0:00:01.335 *****
2025-09-23 07:59:10.973805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973834 | orchestrator |
2025-09-23 07:59:10.973838 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-09-23 07:59:10.973841 | orchestrator | Tuesday 23 September 2025 07:56:55 +0000 (0:00:00.714) 0:00:02.050 *****
2025-09-23 07:59:10.973845 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-09-23 07:59:10.973850 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-09-23 07:59:10.973854 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 07:59:10.973858 | orchestrator |
2025-09-23 07:59:10.973862 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-23 07:59:10.973865 | orchestrator | Tuesday 23 September 2025 07:56:56 +0000 (0:00:00.746) 0:00:02.796 *****
2025-09-23 07:59:10.973869 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 07:59:10.973873 | orchestrator |
2025-09-23 07:59:10.973877 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-09-23 07:59:10.973881 | orchestrator | Tuesday 23 September 2025 07:56:57 +0000 (0:00:00.685) 0:00:03.482 *****
2025-09-23 07:59:10.973885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973899 | orchestrator |
2025-09-23 07:59:10.973903 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-09-23 07:59:10.973909 | orchestrator | Tuesday 23 September 2025 07:56:58 +0000 (0:00:01.201) 0:00:04.683 *****
2025-09-23 07:59:10.973913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973921 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:59:10.973925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973929 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:59:10.973933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973937 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:59:10.973940 | orchestrator |
2025-09-23 07:59:10.973944 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-09-23 07:59:10.973948 | orchestrator | Tuesday 23 September 2025 07:56:58 +0000 (0:00:00.353) 0:00:05.037 *****
2025-09-23 07:59:10.973952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973958 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:59:10.973962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973966 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:59:10.973974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.973981 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:59:10.973984 | orchestrator |
2025-09-23 07:59:10.973988 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-09-23 07:59:10.973992 | orchestrator | Tuesday 23 September 2025 07:56:59 +0000 (0:00:00.766) 0:00:05.803 *****
2025-09-23 07:59:10.973996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.974000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.974004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.974008 | orchestrator |
2025-09-23 07:59:10.974011 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-09-23 07:59:10.974046 | orchestrator | Tuesday 23 September 2025 07:57:00 +0000 (0:00:01.230) 0:00:07.034 *****
2025-09-23 07:59:10.974054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.974058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.974072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-23 07:59:10.974076 | orchestrator |
2025-09-23 07:59:10.974079 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-09-23 07:59:10.974083 | orchestrator | Tuesday 23 September 2025 07:57:02 +0000 (0:00:00.418) 0:00:08.391 *****
2025-09-23 07:59:10.974087 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:59:10.974091 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:59:10.974094 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:59:10.974098 | orchestrator |
2025-09-23 07:59:10.974102 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-09-23 07:59:10.974106 | orchestrator | Tuesday 23 September 2025 07:57:02 +0000 (0:00:00.418) 0:00:08.809 *****
2025-09-23 07:59:10.974110 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-23 07:59:10.974113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-23 07:59:10.974117 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-23 07:59:10.974121 | orchestrator |
2025-09-23 07:59:10.974125 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-09-23 07:59:10.974128 | orchestrator | Tuesday 23 September 2025 07:57:03 +0000 (0:00:01.319) 0:00:10.129 *****
2025-09-23 07:59:10.974132 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-23 07:59:10.974136 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-23 07:59:10.974140 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-23 07:59:10.974144 | orchestrator |
2025-09-23 07:59:10.974148 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-09-23 07:59:10.974151 | orchestrator | Tuesday 23 September 2025 07:57:05 +0000 (0:00:01.217) 0:00:11.346 *****
2025-09-23 07:59:10.974155 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 07:59:10.974159 | orchestrator |
2025-09-23 07:59:10.974163 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-23 07:59:10.974166 | orchestrator | Tuesday 23 September 2025 07:57:05 +0000 (0:00:00.659) 0:00:12.006 *****
2025-09-23 07:59:10.974170 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-09-23 07:59:10.974174 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-09-23 07:59:10.974178 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:59:10.974181 | orchestrator | ok: [testbed-node-1]
2025-09-23 07:59:10.974185 | orchestrator | ok: [testbed-node-2]
2025-09-23 07:59:10.974189 | orchestrator |
2025-09-23 07:59:10.974193 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-09-23 07:59:10.974200 | orchestrator | Tuesday 23 September 2025 07:57:06 +0000 (0:00:00.512) 0:00:12.655 *****
2025-09-23 07:59:10.974204 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:59:10.974207 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:59:10.974211 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:59:10.974215 | orchestrator |
2025-09-23 07:59:10.974218 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-09-23 07:59:10.974222 | orchestrator | Tuesday 23 September 2025 07:57:06 +0000 (0:00:00.512) 0:00:13.168 *****
2025-09-23 07:59:10.974284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1320273, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1214318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1320273, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1214318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1320273, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1214318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1320486, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1916447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1320486, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1916447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1320486, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1916447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1320369, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1259072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1320369, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1259072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1320369, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1259072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1320620, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1936448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1320620, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1936448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1320620, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1936448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1320404, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1338105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1320404, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1338105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1320404, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1338105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1320477, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1514056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1320477, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1514056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1320477, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1514056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1320271, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.0789452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1320271, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.0789452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1320271, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.0789452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1320356, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1216435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1320356, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1216435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1320356, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1216435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1320376, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1266437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1320376, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1266437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1320376, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1266437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1320410, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1356437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1320410, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1356437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.974575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1320410, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1356437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1320483, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.153644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1320483, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.153644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1320483, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.153644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1320359, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1234722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1320359, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1234722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1320359, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1234722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1320472, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1500616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1320472, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1500616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1320472, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1500616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1320406, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1348183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1320406, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1348183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1320406, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1348183, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1320399, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.132806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1320399, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.132806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1320399, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 
1758585732.0, 'ctime': 1758611225.132806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1320392, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1296437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1320392, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1296437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1320392, 'dev': 110, 'nlink': 1, 
'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1296437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1320414, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.149175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1320414, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.149175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1320414, 
'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.149175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1320383, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1284838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1320383, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1284838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 
'inode': 1320383, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1284838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1320480, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.152644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1320480, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.152644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1320480, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.152644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1320870, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2636461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1320870, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2636461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1320870, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2636461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1320654, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2199655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1320654, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2199655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1320654, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2199655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1320628, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1966448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1320628, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1966448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1320628, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1966448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1320708, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.224713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1320708, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.224713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974776 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1320708, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.224713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1320623, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1946523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.974787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1320623, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1946523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.975227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1320623, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1946523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.975248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1320827, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2529614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.975253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1320827, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2529614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1320827, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2529614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1320709, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2338042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1320709, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2338042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1320709, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2338042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1320834, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.255163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False,
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1320834, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.255163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1320865, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.262764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1320834, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.255163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1320865, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.262764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1320824, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2506459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1320865, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.262764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path':
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1320824, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2506459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1320824, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2506459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1320701, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2226453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1320701, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2226453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1320701, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2226453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1320649, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.204645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975354 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1320649, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.204645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1320649, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.204645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1320698, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2216454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1320698, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2216454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1320698, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2216454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1320629, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2024434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1320629, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2024434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1320629, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2024434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1320705, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.223695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1320705, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.223695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1320850, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2619572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1320705, 'dev': 110,
'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.223695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1320850, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2619572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1320842, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.257646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1320850, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2619572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1320842, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.257646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1320624, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.194894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json',
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1320842, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.257646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1320624, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.194894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1320625, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1966448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1320624, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.194894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1320625, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1966448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1320740, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2492669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975480 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1320740, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2492669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1320625, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.1966448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1320840, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2557313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1320840, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2557313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1320740, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2492669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-23 07:59:10.975521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1320840, 'dev': 110, 'nlink': 1, 'atime': 1758585732.0, 'mtime': 1758585732.0, 'ctime': 1758611225.2557313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True,
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-23 07:59:10.975540 | orchestrator | 2025-09-23 07:59:10.975544 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-23 07:59:10.975548 | orchestrator | Tuesday 23 September 2025 07:57:46 +0000 (0:00:39.421) 0:00:52.589 ***** 2025-09-23 07:59:10.975552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-23 07:59:10.975560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-23 07:59:10.975564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-23 07:59:10.975568 | orchestrator | 2025-09-23 07:59:10.975571 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-23 07:59:10.975575 | orchestrator | Tuesday 23 September 2025 07:57:47 +0000 (0:00:01.027) 0:00:53.617 ***** 2025-09-23 07:59:10.975579 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:59:10.975583 | orchestrator | 2025-09-23 07:59:10.975587 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-23 07:59:10.975590 | orchestrator | Tuesday 23 September 2025 07:57:49 +0000 (0:00:02.283) 0:00:55.900 ***** 2025-09-23 07:59:10.975594 | orchestrator | changed: [testbed-node-0] 2025-09-23 07:59:10.975598 | orchestrator | 2025-09-23 07:59:10.975601 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-23 07:59:10.975605 | orchestrator | Tuesday 23 September 2025 07:57:51 +0000 (0:00:02.163) 0:00:58.064 ***** 2025-09-23 07:59:10.975609 | orchestrator | 2025-09-23 07:59:10.975612 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-23 07:59:10.975616 | orchestrator | Tuesday 23 September 2025 07:57:51 +0000 (0:00:00.060) 0:00:58.124 ***** 2025-09-23 07:59:10.975620 | orchestrator | 2025-09-23 07:59:10.975623 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 
2025-09-23 07:59:10.975627 | orchestrator | Tuesday 23 September 2025 07:57:52 +0000 (0:00:00.067) 0:00:58.191 *****
2025-09-23 07:59:10.975631 | orchestrator |
2025-09-23 07:59:10.975634 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-09-23 07:59:10.975641 | orchestrator | Tuesday 23 September 2025 07:57:52 +0000 (0:00:00.238) 0:00:58.429 *****
2025-09-23 07:59:10.975645 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:59:10.975648 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:59:10.975652 | orchestrator | changed: [testbed-node-0]
2025-09-23 07:59:10.975656 | orchestrator |
2025-09-23 07:59:10.975659 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-09-23 07:59:10.975663 | orchestrator | Tuesday 23 September 2025 07:57:54 +0000 (0:00:01.873) 0:01:00.303 *****
2025-09-23 07:59:10.975667 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:59:10.975671 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:59:10.975674 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-09-23 07:59:10.975679 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-09-23 07:59:10.975685 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
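The handlers above perform a rolling restart: grafana is restarted on the first node, a health probe is retried (12 retries, visible as the FAILED - RETRYING records) until the service answers, and only then are the remaining containers restarted. A minimal sketch of that retries/until pattern; `wait_for` and `probe` are hypothetical names for illustration, not the grafana role's actual task code:

```python
import time

def wait_for(probe, retries=12, delay=10):
    """Poll `probe` until it returns True, like an Ansible
    retries/delay/until loop; returns the attempt count used."""
    for attempt in range(1, retries + 1):
        if probe():
            return attempt
        # the playbook logs "FAILED - RETRYING: ... (N retries left)" here
        time.sleep(delay)
    raise TimeoutError("service did not become ready in time")

# Simulate a service that only answers on the fourth probe, matching the
# three FAILED - RETRYING records above followed by "ok".
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 4

print(wait_for(probe, delay=0))  # → 4
```

With the handler's real settings (12 retries with a sleep between attempts) this bounds how long the play waits before failing the deploy.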
2025-09-23 07:59:10.975689 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:59:10.975693 | orchestrator |
2025-09-23 07:59:10.975697 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-23 07:59:10.975708 | orchestrator | Tuesday 23 September 2025 07:58:32 +0000 (0:00:38.653) 0:01:38.956 *****
2025-09-23 07:59:10.975712 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:59:10.975715 | orchestrator | changed: [testbed-node-2]
2025-09-23 07:59:10.975725 | orchestrator | changed: [testbed-node-1]
2025-09-23 07:59:10.975729 | orchestrator |
2025-09-23 07:59:10.975732 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-23 07:59:10.975736 | orchestrator | Tuesday 23 September 2025 07:59:02 +0000 (0:00:29.466) 0:02:08.423 *****
2025-09-23 07:59:10.975740 | orchestrator | ok: [testbed-node-0]
2025-09-23 07:59:10.975743 | orchestrator |
2025-09-23 07:59:10.975747 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-23 07:59:10.975751 | orchestrator | Tuesday 23 September 2025 07:59:04 +0000 (0:00:02.240) 0:02:10.663 *****
2025-09-23 07:59:10.975754 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:59:10.975758 | orchestrator | skipping: [testbed-node-1]
2025-09-23 07:59:10.975762 | orchestrator | skipping: [testbed-node-2]
2025-09-23 07:59:10.975765 | orchestrator |
2025-09-23 07:59:10.975769 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-23 07:59:10.975773 | orchestrator | Tuesday 23 September 2025 07:59:04 +0000 (0:00:00.517) 0:02:11.181 *****
2025-09-23 07:59:10.975781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-23 07:59:10.975786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-23 07:59:10.975790 | orchestrator |
2025-09-23 07:59:10.975794 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-23 07:59:10.975798 | orchestrator | Tuesday 23 September 2025 07:59:07 +0000 (0:00:02.475) 0:02:13.657 *****
2025-09-23 07:59:10.975802 | orchestrator | skipping: [testbed-node-0]
2025-09-23 07:59:10.975805 | orchestrator |
2025-09-23 07:59:10.975809 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 07:59:10.975813 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-23 07:59:10.975820 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-23 07:59:10.975824 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-23 07:59:10.975827 | orchestrator |
2025-09-23 07:59:10.975831 | orchestrator |
2025-09-23 07:59:10.975835 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 07:59:10.975838 | orchestrator | Tuesday 23 September 2025 07:59:07 +0000 (0:00:00.255) 0:02:13.912 *****
2025-09-23 07:59:10.975842 | orchestrator | ===============================================================================
2025-09-23 07:59:10.975846 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.42s
2025-09-23 07:59:10.975849 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.65s
2025-09-23 07:59:10.975854 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.47s
2025-09-23 07:59:10.975858 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.48s
2025-09-23 07:59:10.975862 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.28s
2025-09-23 07:59:10.975866 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.24s
2025-09-23 07:59:10.975870 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.16s
2025-09-23 07:59:10.975874 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.87s
2025-09-23 07:59:10.975879 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.36s
2025-09-23 07:59:10.975883 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s
2025-09-23 07:59:10.975887 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.23s
2025-09-23 07:59:10.975891 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.22s
2025-09-23 07:59:10.975895 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.20s
2025-09-23 07:59:10.975899 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s
2025-09-23 07:59:10.975903 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.77s
2025-09-23 07:59:10.975908 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.75s
2025-09-23 07:59:10.975912 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.71s
2025-09-23 07:59:10.975918 | orchestrator | grafana : include_tasks
------------------------------------------------- 0.69s
2025-09-23 07:59:10.975923 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.66s
2025-09-23 07:59:10.975927 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.65s
2025-09-23 07:59:10.975932 | orchestrator | 2025-09-23 07:59:10 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 07:59:10.975936 | orchestrator | 2025-09-23 07:59:10 | INFO  | Wait 1 second(s) until the next check
[... identical "is in state STARTED" / "Wait 1 second(s) until the next check" pairs, polled every ~3 seconds from 07:59:14 through 08:02:56, trimmed ...]
2025-09-23 08:02:59.168624 | orchestrator | 2025-09-23 08:02:59 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state STARTED
2025-09-23 08:02:59.168719 | orchestrator | 2025-09-23 08:02:59 | INFO  | Wait 1 second(s) until the next check
2025-09-23 08:03:02.211198 | orchestrator |
2025-09-23 08:03:02.211333 | orchestrator |
2025-09-23 08:03:02.211351 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-23 08:03:02.211379 | orchestrator |
2025-09-23 08:03:02.211391 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-23 08:03:02.211412 | orchestrator | Tuesday 23 September 2025 07:54:40 +0000 (0:00:00.283) 0:00:00.283 *****
2025-09-23 08:03:02.211424 | orchestrator | changed: [testbed-manager]
2025-09-23 08:03:02.211437 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.211448 | orchestrator | changed: [testbed-node-1]
2025-09-23 08:03:02.211459 | orchestrator | changed: [testbed-node-2]
2025-09-23 08:03:02.211470 | orchestrator |
changed: [testbed-node-3] 2025-09-23 08:03:02.211482 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.211517 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.211529 | orchestrator | 2025-09-23 08:03:02.211540 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 08:03:02.211551 | orchestrator | Tuesday 23 September 2025 07:54:41 +0000 (0:00:00.810) 0:00:01.094 ***** 2025-09-23 08:03:02.211562 | orchestrator | changed: [testbed-manager] 2025-09-23 08:03:02.211573 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.211583 | orchestrator | changed: [testbed-node-1] 2025-09-23 08:03:02.211594 | orchestrator | changed: [testbed-node-2] 2025-09-23 08:03:02.211605 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.211615 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.211806 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.211824 | orchestrator | 2025-09-23 08:03:02.211838 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 08:03:02.211851 | orchestrator | Tuesday 23 September 2025 07:54:41 +0000 (0:00:00.658) 0:00:01.752 ***** 2025-09-23 08:03:02.211864 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-23 08:03:02.211877 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-23 08:03:02.211890 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-23 08:03:02.211903 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-23 08:03:02.211915 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-23 08:03:02.211928 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-23 08:03:02.211940 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-23 08:03:02.211953 | orchestrator | 2025-09-23 08:03:02.211966 | orchestrator | 
PLAY [Bootstrap nova API databases] ******************************************** 2025-09-23 08:03:02.211979 | orchestrator | 2025-09-23 08:03:02.211992 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-23 08:03:02.212005 | orchestrator | Tuesday 23 September 2025 07:54:42 +0000 (0:00:00.777) 0:00:02.529 ***** 2025-09-23 08:03:02.212017 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 08:03:02.212030 | orchestrator | 2025-09-23 08:03:02.212042 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-23 08:03:02.212055 | orchestrator | Tuesday 23 September 2025 07:54:43 +0000 (0:00:00.723) 0:00:03.253 ***** 2025-09-23 08:03:02.212068 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-23 08:03:02.212082 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-23 08:03:02.212095 | orchestrator | 2025-09-23 08:03:02.212106 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-23 08:03:02.212116 | orchestrator | Tuesday 23 September 2025 07:54:47 +0000 (0:00:04.176) 0:00:07.430 ***** 2025-09-23 08:03:02.212127 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-23 08:03:02.212138 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-23 08:03:02.212148 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.212159 | orchestrator | 2025-09-23 08:03:02.212170 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-23 08:03:02.212181 | orchestrator | Tuesday 23 September 2025 07:54:51 +0000 (0:00:04.535) 0:00:11.965 ***** 2025-09-23 08:03:02.212191 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.212202 | orchestrator | 2025-09-23 08:03:02.212235 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 
2025-09-23 08:03:02.212247 | orchestrator | Tuesday 23 September 2025 07:54:53 +0000 (0:00:01.250) 0:00:13.216 *****
2025-09-23 08:03:02.212257 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.212268 | orchestrator |
2025-09-23 08:03:02.212291 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-23 08:03:02.212303 | orchestrator | Tuesday 23 September 2025 07:54:54 +0000 (0:00:01.481) 0:00:14.697 *****
2025-09-23 08:03:02.212314 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.212324 | orchestrator |
2025-09-23 08:03:02.212335 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-23 08:03:02.212357 | orchestrator | Tuesday 23 September 2025 07:54:57 +0000 (0:00:02.928) 0:00:17.625 *****
2025-09-23 08:03:02.212368 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.212379 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.212389 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.212400 | orchestrator |
2025-09-23 08:03:02.212411 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-23 08:03:02.212421 | orchestrator | Tuesday 23 September 2025 07:54:57 +0000 (0:00:00.359) 0:00:17.984 *****
2025-09-23 08:03:02.212432 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:03:02.212443 | orchestrator |
2025-09-23 08:03:02.212453 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-23 08:03:02.212464 | orchestrator | Tuesday 23 September 2025 07:55:30 +0000 (0:00:32.649) 0:00:50.634 *****
2025-09-23 08:03:02.212474 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.212485 | orchestrator |
2025-09-23 08:03:02.212502 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-23 08:03:02.212539 | orchestrator | Tuesday 23 September 2025 07:55:44 +0000 (0:00:13.908) 0:01:04.542 *****
2025-09-23 08:03:02.212560 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:03:02.212576 | orchestrator |
2025-09-23 08:03:02.212594 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-23 08:03:02.212613 | orchestrator | Tuesday 23 September 2025 07:55:55 +0000 (0:00:11.365) 0:01:15.908 *****
2025-09-23 08:03:02.212654 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:03:02.212674 | orchestrator |
2025-09-23 08:03:02.212692 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-23 08:03:02.212711 | orchestrator | Tuesday 23 September 2025 07:55:56 +0000 (0:00:00.896) 0:01:16.805 *****
2025-09-23 08:03:02.212799 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.212810 | orchestrator |
2025-09-23 08:03:02.212821 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-23 08:03:02.212832 | orchestrator | Tuesday 23 September 2025 07:55:57 +0000 (0:00:00.448) 0:01:17.253 *****
2025-09-23 08:03:02.212844 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 08:03:02.212855 | orchestrator |
2025-09-23 08:03:02.212866 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-23 08:03:02.212876 | orchestrator | Tuesday 23 September 2025 07:55:57 +0000 (0:00:00.527) 0:01:17.780 *****
2025-09-23 08:03:02.212888 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:03:02.212899 | orchestrator |
2025-09-23 08:03:02.212910 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-23 08:03:02.212920 | orchestrator | Tuesday 23 September 2025 07:56:16 +0000 (0:00:18.326) 0:01:36.107 *****
2025-09-23 08:03:02.212931 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.212942 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.212953 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.212964 | orchestrator |
2025-09-23 08:03:02.212974 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-23 08:03:02.212985 | orchestrator |
2025-09-23 08:03:02.212996 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-23 08:03:02.213007 | orchestrator | Tuesday 23 September 2025 07:56:16 +0000 (0:00:00.282) 0:01:36.389 *****
2025-09-23 08:03:02.213018 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 08:03:02.213028 | orchestrator |
2025-09-23 08:03:02.213039 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-23 08:03:02.213050 | orchestrator | Tuesday 23 September 2025 07:56:16 +0000 (0:00:00.547) 0:01:36.937 *****
2025-09-23 08:03:02.213061 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213072 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213083 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.213093 | orchestrator |
2025-09-23 08:03:02.213114 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-23 08:03:02.213125 | orchestrator | Tuesday 23 September 2025 07:56:18 +0000 (0:00:01.985) 0:01:38.922 *****
2025-09-23 08:03:02.213136 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213147 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213158 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.213168 | orchestrator |
2025-09-23 08:03:02.213179 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-23 08:03:02.213190 | orchestrator | Tuesday 23 September 2025 07:56:20 +0000 (0:00:02.080) 0:01:41.003 *****
2025-09-23 08:03:02.213201 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.213234 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213245 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213256 | orchestrator |
2025-09-23 08:03:02.213267 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-23 08:03:02.213278 | orchestrator | Tuesday 23 September 2025 07:56:21 +0000 (0:00:00.381) 0:01:41.385 *****
2025-09-23 08:03:02.213288 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-23 08:03:02.213299 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213310 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-23 08:03:02.213321 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213332 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-23 08:03:02.213343 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-23 08:03:02.213354 | orchestrator |
2025-09-23 08:03:02.213364 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-23 08:03:02.213375 | orchestrator | Tuesday 23 September 2025 07:56:29 +0000 (0:00:08.600) 0:01:49.985 *****
2025-09-23 08:03:02.213386 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.213396 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213407 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213418 | orchestrator |
2025-09-23 08:03:02.213428 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-23 08:03:02.213439 | orchestrator | Tuesday 23 September 2025 07:56:30 +0000 (0:00:00.594) 0:01:50.580 *****
2025-09-23 08:03:02.213450 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-23 08:03:02.213461 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.213471 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-23 08:03:02.213482 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213493 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-23 08:03:02.213504 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213515 | orchestrator |
2025-09-23 08:03:02.213526 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-23 08:03:02.213536 | orchestrator | Tuesday 23 September 2025 07:56:31 +0000 (0:00:01.140) 0:01:51.721 *****
2025-09-23 08:03:02.213547 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213558 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213569 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.213579 | orchestrator |
2025-09-23 08:03:02.213590 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-23 08:03:02.213601 | orchestrator | Tuesday 23 September 2025 07:56:32 +0000 (0:00:00.602) 0:01:52.324 *****
2025-09-23 08:03:02.213611 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213622 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213640 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.213651 | orchestrator |
2025-09-23 08:03:02.213662 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-23 08:03:02.213672 | orchestrator | Tuesday 23 September 2025 07:56:33 +0000 (0:00:01.036) 0:01:53.360 *****
2025-09-23 08:03:02.213683 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213694 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213714 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.213733 | orchestrator |
2025-09-23 08:03:02.213744 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-23 08:03:02.213754 | orchestrator | Tuesday 23 September 2025 07:56:35 +0000 (0:00:02.072) 0:01:55.433 *****
2025-09-23 08:03:02.213765 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213776 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213787 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:03:02.213797 | orchestrator |
2025-09-23 08:03:02.213808 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-23 08:03:02.213819 | orchestrator | Tuesday 23 September 2025 07:56:56 +0000 (0:00:20.709) 0:02:16.143 *****
2025-09-23 08:03:02.213830 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213841 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213852 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:03:02.213863 | orchestrator |
2025-09-23 08:03:02.213873 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-23 08:03:02.213884 | orchestrator | Tuesday 23 September 2025 07:57:07 +0000 (0:00:11.745) 0:02:27.888 *****
2025-09-23 08:03:02.213895 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:03:02.213905 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213916 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213927 | orchestrator |
2025-09-23 08:03:02.213937 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-23 08:03:02.213948 | orchestrator | Tuesday 23 September 2025 07:57:09 +0000 (0:00:01.246) 0:02:29.134 *****
2025-09-23 08:03:02.213959 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.213969 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.213980 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:03:02.213991 | orchestrator |
2025-09-23 08:03:02.214001 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-23 08:03:02.214012 | orchestrator | Tuesday 23 September 2025 07:57:20 +0000 (0:00:11.814) 0:02:40.949 *****
2025-09-23 08:03:02.214083 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.214095 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.214106 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.214116 | orchestrator |
2025-09-23 08:03:02.214127 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-23 08:03:02.214138 | orchestrator | Tuesday 23 September 2025 07:57:21 +0000 (0:00:01.069) 0:02:42.018 *****
2025-09-23 08:03:02.214149 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.214160 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.214170 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.214181 | orchestrator |
2025-09-23 08:03:02.214192 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-23 08:03:02.214203 | orchestrator |
2025-09-23 08:03:02.214265 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-23 08:03:02.214277 | orchestrator | Tuesday 23 September 2025 07:57:22 +0000 (0:00:00.505) 0:02:42.524 *****
2025-09-23 08:03:02.214288 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 08:03:02.214301 | orchestrator |
2025-09-23 08:03:02.214311 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-23 08:03:02.214322 | orchestrator | Tuesday 23 September 2025 07:57:23 +0000 (0:00:00.614) 0:02:43.138 *****
2025-09-23 08:03:02.214333 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-23 08:03:02.214344 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-23 08:03:02.214355 | orchestrator |
2025-09-23 08:03:02.214365 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-23 08:03:02.214376 | orchestrator | Tuesday 23 September 2025 07:57:26 +0000 (0:00:03.415) 0:02:46.553 *****
2025-09-23 08:03:02.214387 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-23 08:03:02.214399 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-23 08:03:02.214420 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-23 08:03:02.214431 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-23 08:03:02.214442 | orchestrator |
2025-09-23 08:03:02.214452 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-23 08:03:02.214464 | orchestrator | Tuesday 23 September 2025 07:57:33 +0000 (0:00:06.921) 0:02:53.475 *****
2025-09-23 08:03:02.214474 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-23 08:03:02.214485 | orchestrator |
2025-09-23 08:03:02.214496 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-23 08:03:02.214507 | orchestrator | Tuesday 23 September 2025 07:57:36 +0000 (0:00:03.489) 0:02:56.965 *****
2025-09-23 08:03:02.214517 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-23 08:03:02.214528 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-23 08:03:02.214539 | orchestrator |
2025-09-23 08:03:02.214550 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-23 08:03:02.214561 | orchestrator | Tuesday 23 September 2025 07:57:40 +0000 (0:00:03.923) 0:03:00.888 *****
2025-09-23 08:03:02.214571 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-23 08:03:02.214582 | orchestrator |
2025-09-23 08:03:02.214593 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-23 08:03:02.214604 | orchestrator | Tuesday 23 September 2025 07:57:44 +0000 (0:00:03.697) 0:03:04.586 *****
2025-09-23 08:03:02.214620 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-23 08:03:02.214631 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-23 08:03:02.214642 | orchestrator |
2025-09-23 08:03:02.214653 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-23 08:03:02.214672 | orchestrator | Tuesday 23 September 2025 07:57:52 +0000 (0:00:07.918) 0:03:12.504 *****
2025-09-23 08:03:02.214689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.214705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.214726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.214753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.214767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.214778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.214790 | orchestrator |
2025-09-23 08:03:02.214801 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-23 08:03:02.214812 | orchestrator | Tuesday 23 September 2025 07:57:53 +0000 (0:00:01.307) 0:03:13.812 *****
2025-09-23 08:03:02.214823 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.214834 | orchestrator |
2025-09-23 08:03:02.214845 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-23 08:03:02.214864 | orchestrator | Tuesday 23 September 2025 07:57:53 +0000 (0:00:00.148) 0:03:13.960 *****
2025-09-23 08:03:02.214874 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.214885 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.214895 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.214906 | orchestrator |
2025-09-23 08:03:02.214917 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-23 08:03:02.214928 | orchestrator | Tuesday 23 September 2025 07:57:54 +0000 (0:00:00.718) 0:03:14.303 *****
2025-09-23 08:03:02.214938 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-23 08:03:02.214949 | orchestrator |
2025-09-23 08:03:02.214960 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-23 08:03:02.214971 | orchestrator | Tuesday 23 September 2025 07:57:54 +0000 (0:00:00.539) 0:03:15.022 *****
2025-09-23 08:03:02.214982 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.214993 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.215003 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.215014 | orchestrator |
2025-09-23 08:03:02.215025 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-23 08:03:02.215036 | orchestrator | Tuesday 23 September 2025 07:57:55 +0000 (0:00:00.539) 0:03:15.561 *****
2025-09-23 08:03:02.215047 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-23 08:03:02.215057 | orchestrator |
2025-09-23 08:03:02.215068 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-23 08:03:02.215079 | orchestrator | Tuesday 23 September 2025 07:57:56 +0000 (0:00:00.556) 0:03:16.118 *****
2025-09-23 08:03:02.215095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.215128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.215150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.215163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.215175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.215198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.215226 | orchestrator |
2025-09-23 08:03:02.215238 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-23 08:03:02.215249 | orchestrator | Tuesday 23 September 2025 07:57:58 +0000 (0:00:02.296) 0:03:18.415 *****
2025-09-23 08:03:02.215261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.215288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.215300 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:03:02.215484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.215511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.215523 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:03:02.215546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-23 08:03:02.215568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-23 08:03:02.215580 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:03:02.215591 | orchestrator |
2025-09-23 08:03:02.215602 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-23 08:03:02.215613 | orchestrator | Tuesday 23 September 2025 07:57:59 +0000 (0:00:00.759) 0:03:19.174 *****
2025-09-23 08:03:02.215625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external':
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 08:03:02.215636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.215648 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02 | INFO  | Task 0f67f400-9452-48b7-ae82-50568900f456 is in state SUCCESS 2025-09-23 08:03:02.215672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 08:03:02.215705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.215716 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.215728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 08:03:02.215740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.215751 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.215762 | orchestrator | 2025-09-23 08:03:02.215773 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-23 08:03:02.215784 | orchestrator | Tuesday 23 September 2025 07:57:59 +0000 (0:00:00.682) 0:03:19.857 ***** 2025-09-23 08:03:02.215811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.215830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.215843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.215855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.215879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.215899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.215911 | orchestrator | 2025-09-23 08:03:02.215922 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-23 08:03:02.215933 | orchestrator | Tuesday 23 September 2025 07:58:02 +0000 (0:00:02.343) 0:03:22.200 ***** 2025-09-23 08:03:02.215945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.215957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.215982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.216002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.216013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.216025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.216036 | orchestrator | 2025-09-23 08:03:02.216047 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-23 08:03:02.216058 | orchestrator | Tuesday 23 September 2025 07:58:07 +0000 (0:00:05.290) 0:03:27.490 ***** 2025-09-23 08:03:02.216069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 08:03:02.216099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.216111 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.216123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 08:03:02.216135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.216146 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.216158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-23 08:03:02.216174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.216192 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.216203 | orchestrator | 2025-09-23 08:03:02.216237 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-23 08:03:02.216248 | orchestrator | Tuesday 23 September 2025 07:58:08 +0000 (0:00:00.558) 0:03:28.049 ***** 2025-09-23 08:03:02.216266 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.216277 | orchestrator | changed: [testbed-node-1] 2025-09-23 08:03:02.216288 | orchestrator | changed: [testbed-node-2] 2025-09-23 08:03:02.216299 | orchestrator | 2025-09-23 08:03:02.216310 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-23 08:03:02.216321 | orchestrator | Tuesday 23 September 2025 07:58:09 +0000 (0:00:01.442) 0:03:29.492 ***** 2025-09-23 08:03:02.216331 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.216342 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.216353 | orchestrator | skipping: 
[testbed-node-2] 2025-09-23 08:03:02.216364 | orchestrator | 2025-09-23 08:03:02.216375 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-23 08:03:02.216386 | orchestrator | Tuesday 23 September 2025 07:58:09 +0000 (0:00:00.293) 0:03:29.785 ***** 2025-09-23 08:03:02.216397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.216410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.216446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-23 08:03:02.216460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.216472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.216484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.216495 | orchestrator | 2025-09-23 
08:03:02.216506 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-23 08:03:02.216517 | orchestrator | Tuesday 23 September 2025 07:58:11 +0000 (0:00:02.060) 0:03:31.845 ***** 2025-09-23 08:03:02.216528 | orchestrator | 2025-09-23 08:03:02.216539 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-23 08:03:02.216550 | orchestrator | Tuesday 23 September 2025 07:58:11 +0000 (0:00:00.119) 0:03:31.965 ***** 2025-09-23 08:03:02.216561 | orchestrator | 2025-09-23 08:03:02.216572 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-23 08:03:02.216582 | orchestrator | Tuesday 23 September 2025 07:58:12 +0000 (0:00:00.122) 0:03:32.088 ***** 2025-09-23 08:03:02.216593 | orchestrator | 2025-09-23 08:03:02.216604 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-23 08:03:02.216621 | orchestrator | Tuesday 23 September 2025 07:58:12 +0000 (0:00:00.126) 0:03:32.214 ***** 2025-09-23 08:03:02.216632 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.216643 | orchestrator | changed: [testbed-node-1] 2025-09-23 08:03:02.216654 | orchestrator | changed: [testbed-node-2] 2025-09-23 08:03:02.216665 | orchestrator | 2025-09-23 08:03:02.216676 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-23 08:03:02.216686 | orchestrator | Tuesday 23 September 2025 07:58:32 +0000 (0:00:19.889) 0:03:52.104 ***** 2025-09-23 08:03:02.216697 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.216708 | orchestrator | changed: [testbed-node-2] 2025-09-23 08:03:02.216719 | orchestrator | changed: [testbed-node-1] 2025-09-23 08:03:02.216730 | orchestrator | 2025-09-23 08:03:02.216741 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-23 08:03:02.216751 | 
orchestrator | 2025-09-23 08:03:02.216762 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-23 08:03:02.216773 | orchestrator | Tuesday 23 September 2025 07:58:39 +0000 (0:00:07.140) 0:03:59.244 ***** 2025-09-23 08:03:02.216784 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 08:03:02.216795 | orchestrator | 2025-09-23 08:03:02.216806 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-23 08:03:02.216817 | orchestrator | Tuesday 23 September 2025 07:58:40 +0000 (0:00:01.189) 0:04:00.434 ***** 2025-09-23 08:03:02.216828 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.216839 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.216849 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.216860 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.216871 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.216881 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.216892 | orchestrator | 2025-09-23 08:03:02.216903 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-23 08:03:02.216918 | orchestrator | Tuesday 23 September 2025 07:58:40 +0000 (0:00:00.604) 0:04:01.038 ***** 2025-09-23 08:03:02.216929 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.216940 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.216951 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.216962 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 08:03:02.216973 | orchestrator | 2025-09-23 08:03:02.216989 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-23 08:03:02.217000 | orchestrator | 
Tuesday 23 September 2025 07:58:42 +0000 (0:00:01.024) 0:04:02.062 ***** 2025-09-23 08:03:02.217011 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-23 08:03:02.217022 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-23 08:03:02.217033 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-23 08:03:02.217044 | orchestrator | 2025-09-23 08:03:02.217055 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-23 08:03:02.217066 | orchestrator | Tuesday 23 September 2025 07:58:42 +0000 (0:00:00.635) 0:04:02.697 ***** 2025-09-23 08:03:02.217077 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-23 08:03:02.217087 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-23 08:03:02.217098 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-23 08:03:02.217109 | orchestrator | 2025-09-23 08:03:02.217120 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-23 08:03:02.217130 | orchestrator | Tuesday 23 September 2025 07:58:43 +0000 (0:00:01.136) 0:04:03.834 ***** 2025-09-23 08:03:02.217141 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-23 08:03:02.217152 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.217163 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-23 08:03:02.217180 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.217191 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-23 08:03:02.217202 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.217231 | orchestrator | 2025-09-23 08:03:02.217242 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-23 08:03:02.217253 | orchestrator | Tuesday 23 September 2025 07:58:44 +0000 (0:00:00.725) 0:04:04.560 ***** 2025-09-23 08:03:02.217264 
| orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 08:03:02.217275 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 08:03:02.217285 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.217296 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 08:03:02.217307 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 08:03:02.217318 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.217329 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-23 08:03:02.217340 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-23 08:03:02.217351 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-23 08:03:02.217362 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-23 08:03:02.217373 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-23 08:03:02.217383 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.217394 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-23 08:03:02.217405 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-23 08:03:02.217416 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-23 08:03:02.217427 | orchestrator | 2025-09-23 08:03:02.217437 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-23 08:03:02.217448 | orchestrator | Tuesday 23 September 2025 07:58:46 +0000 (0:00:02.041) 0:04:06.601 ***** 2025-09-23 08:03:02.217459 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.217470 | orchestrator | skipping: 
[testbed-node-1] 2025-09-23 08:03:02.217481 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.217492 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.217503 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.217513 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.217524 | orchestrator | 2025-09-23 08:03:02.217535 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-23 08:03:02.217546 | orchestrator | Tuesday 23 September 2025 07:58:47 +0000 (0:00:01.405) 0:04:08.007 ***** 2025-09-23 08:03:02.217556 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.217567 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.217578 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.217588 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.217599 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.217610 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.217620 | orchestrator | 2025-09-23 08:03:02.217631 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-23 08:03:02.217642 | orchestrator | Tuesday 23 September 2025 07:58:49 +0000 (0:00:01.601) 0:04:09.609 ***** 2025-09-23 08:03:02.217666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2025-09-23 08:03:02.217709 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217885 | orchestrator | 2025-09-23 08:03:02.217896 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-23 08:03:02.217907 | orchestrator | Tuesday 23 September 2025 07:58:51 +0000 (0:00:02.207) 0:04:11.817 ***** 2025-09-23 08:03:02.217918 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 08:03:02.217930 | orchestrator | 2025-09-23 08:03:02.217941 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-23 08:03:02.217952 | orchestrator | Tuesday 23 September 2025 07:58:52 +0000 (0:00:01.222) 0:04:13.039 ***** 2025-09-23 08:03:02.217963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.217975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218151 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-09-23 08:03:02.218185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.218744 | orchestrator | 2025-09-23 08:03:02.218754 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-23 08:03:02.218764 | orchestrator | Tuesday 23 September 2025 07:58:56 +0000 (0:00:03.412) 0:04:16.451 ***** 2025-09-23 08:03:02.218775 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.218786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.218796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.218805 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.218816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.218842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.218857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.218867 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.218877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.218887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.218897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.218912 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.218928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.218944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.218954 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.218964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.218974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.218984 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.218994 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.219004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.219062 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.219073 | orchestrator | 2025-09-23 08:03:02.219083 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-23 08:03:02.219092 | orchestrator | Tuesday 23 September 2025 07:58:57 +0000 (0:00:01.591) 0:04:18.043 ***** 2025-09-23 08:03:02.219108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.219123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.219133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.219143 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.219153 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.219163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.219179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.219189 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.219205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.219243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.219254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.219264 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.219274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.219294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.219305 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.219315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.219332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.219344 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.219360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.219373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.219384 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.219397 | orchestrator | 2025-09-23 08:03:02.219408 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-23 08:03:02.219420 | orchestrator | Tuesday 23 September 2025 07:59:00 +0000 (0:00:02.112) 0:04:20.155 ***** 2025-09-23 08:03:02.219431 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.219442 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.219454 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.219465 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-23 08:03:02.219476 | orchestrator | 2025-09-23 08:03:02.219488 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-23 08:03:02.219507 | orchestrator | Tuesday 23 September 2025 07:59:01 +0000 (0:00:00.920) 0:04:21.076 ***** 2025-09-23 08:03:02.219518 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-23 08:03:02.219529 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-23 08:03:02.219541 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-23 08:03:02.219552 | 
orchestrator | 2025-09-23 08:03:02.219564 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-23 08:03:02.219575 | orchestrator | Tuesday 23 September 2025 07:59:01 +0000 (0:00:00.829) 0:04:21.905 ***** 2025-09-23 08:03:02.219586 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-23 08:03:02.219597 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-23 08:03:02.219609 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-23 08:03:02.219620 | orchestrator | 2025-09-23 08:03:02.219631 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-23 08:03:02.219642 | orchestrator | Tuesday 23 September 2025 07:59:02 +0000 (0:00:00.820) 0:04:22.725 ***** 2025-09-23 08:03:02.219654 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:03:02.219665 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:03:02.219677 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:03:02.219688 | orchestrator | 2025-09-23 08:03:02.219698 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-23 08:03:02.219708 | orchestrator | Tuesday 23 September 2025 07:59:03 +0000 (0:00:00.526) 0:04:23.252 ***** 2025-09-23 08:03:02.219718 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:03:02.219727 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:03:02.219737 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:03:02.219746 | orchestrator | 2025-09-23 08:03:02.219756 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-23 08:03:02.219765 | orchestrator | Tuesday 23 September 2025 07:59:04 +0000 (0:00:00.987) 0:04:24.240 ***** 2025-09-23 08:03:02.219775 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-23 08:03:02.219784 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-23 08:03:02.219794 | orchestrator | changed: [testbed-node-5] => 
(item=nova-compute) 2025-09-23 08:03:02.219803 | orchestrator | 2025-09-23 08:03:02.219813 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-23 08:03:02.219822 | orchestrator | Tuesday 23 September 2025 07:59:05 +0000 (0:00:01.297) 0:04:25.538 ***** 2025-09-23 08:03:02.219832 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-23 08:03:02.219842 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-23 08:03:02.219851 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-23 08:03:02.219861 | orchestrator | 2025-09-23 08:03:02.219871 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-23 08:03:02.219880 | orchestrator | Tuesday 23 September 2025 07:59:06 +0000 (0:00:01.263) 0:04:26.802 ***** 2025-09-23 08:03:02.219890 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-23 08:03:02.219899 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-23 08:03:02.219914 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-23 08:03:02.219924 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-23 08:03:02.219933 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-23 08:03:02.219942 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-23 08:03:02.219952 | orchestrator | 2025-09-23 08:03:02.219961 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-23 08:03:02.219971 | orchestrator | Tuesday 23 September 2025 07:59:10 +0000 (0:00:03.933) 0:04:30.735 ***** 2025-09-23 08:03:02.219980 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.219990 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.220000 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.220009 | orchestrator | 2025-09-23 08:03:02.220019 
| orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-23 08:03:02.220035 | orchestrator | Tuesday 23 September 2025 07:59:11 +0000 (0:00:00.541) 0:04:31.276 ***** 2025-09-23 08:03:02.220045 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.220059 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.220069 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.220078 | orchestrator | 2025-09-23 08:03:02.220087 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-23 08:03:02.220097 | orchestrator | Tuesday 23 September 2025 07:59:11 +0000 (0:00:00.308) 0:04:31.585 ***** 2025-09-23 08:03:02.220106 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.220116 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.220126 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.220135 | orchestrator | 2025-09-23 08:03:02.220145 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-23 08:03:02.220154 | orchestrator | Tuesday 23 September 2025 07:59:12 +0000 (0:00:01.232) 0:04:32.817 ***** 2025-09-23 08:03:02.220164 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-23 08:03:02.220174 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-23 08:03:02.220184 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-23 08:03:02.220193 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-23 08:03:02.220203 | orchestrator | changed: [testbed-node-5] => 
(item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-23 08:03:02.220227 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-23 08:03:02.220237 | orchestrator | 2025-09-23 08:03:02.220247 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-23 08:03:02.220256 | orchestrator | Tuesday 23 September 2025 07:59:16 +0000 (0:00:03.375) 0:04:36.193 ***** 2025-09-23 08:03:02.220266 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-23 08:03:02.220276 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-23 08:03:02.220286 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-23 08:03:02.220295 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-23 08:03:02.220305 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.220314 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-23 08:03:02.220324 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.220334 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-23 08:03:02.220343 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.220353 | orchestrator | 2025-09-23 08:03:02.220362 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-23 08:03:02.220372 | orchestrator | Tuesday 23 September 2025 07:59:19 +0000 (0:00:03.590) 0:04:39.784 ***** 2025-09-23 08:03:02.220381 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.220391 | orchestrator | 2025-09-23 08:03:02.220401 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-23 08:03:02.220410 | orchestrator | Tuesday 23 September 2025 07:59:19 +0000 (0:00:00.135) 0:04:39.919 ***** 2025-09-23 08:03:02.220420 | orchestrator | skipping: 
[testbed-node-3] 2025-09-23 08:03:02.220429 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.220439 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.220448 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.220458 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.220467 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.220477 | orchestrator | 2025-09-23 08:03:02.220486 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-23 08:03:02.220505 | orchestrator | Tuesday 23 September 2025 07:59:20 +0000 (0:00:00.606) 0:04:40.526 ***** 2025-09-23 08:03:02.220515 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-23 08:03:02.220525 | orchestrator | 2025-09-23 08:03:02.220534 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-23 08:03:02.220544 | orchestrator | Tuesday 23 September 2025 07:59:21 +0000 (0:00:00.661) 0:04:41.188 ***** 2025-09-23 08:03:02.220553 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.220563 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.220572 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.220582 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.220591 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.220601 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.220610 | orchestrator | 2025-09-23 08:03:02.220620 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-23 08:03:02.220629 | orchestrator | Tuesday 23 September 2025 07:59:21 +0000 (0:00:00.751) 0:04:41.939 ***** 2025-09-23 08:03:02.220650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220836 | orchestrator | 2025-09-23 08:03:02.220846 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-23 08:03:02.220855 | orchestrator | Tuesday 23 September 2025 07:59:25 +0000 (0:00:03.748) 0:04:45.688 ***** 2025-09-23 08:03:02.220865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.220881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.220896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.220911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.220921 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.220931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.220947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.220997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.221007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.221022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.221032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.221047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.221057 | orchestrator | 2025-09-23 08:03:02.221067 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-23 08:03:02.221076 | orchestrator | Tuesday 23 September 2025 07:59:32 +0000 (0:00:06.618) 0:04:52.306 
***** 2025-09-23 08:03:02.221086 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.221095 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.221105 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.221114 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.221124 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.221133 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.221142 | orchestrator | 2025-09-23 08:03:02.221152 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-23 08:03:02.221161 | orchestrator | Tuesday 23 September 2025 07:59:33 +0000 (0:00:01.302) 0:04:53.609 ***** 2025-09-23 08:03:02.221175 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-23 08:03:02.221185 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-23 08:03:02.221194 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-23 08:03:02.221204 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-23 08:03:02.221265 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-23 08:03:02.221275 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-23 08:03:02.221284 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-23 08:03:02.221294 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.221310 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-23 08:03:02.221320 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.221329 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 
'dest': 'libvirtd.conf'})  2025-09-23 08:03:02.221339 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.221348 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-23 08:03:02.221358 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-23 08:03:02.221368 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-23 08:03:02.221377 | orchestrator | 2025-09-23 08:03:02.221387 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-23 08:03:02.221397 | orchestrator | Tuesday 23 September 2025 07:59:37 +0000 (0:00:03.670) 0:04:57.279 ***** 2025-09-23 08:03:02.221406 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.221416 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.221425 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.221435 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.221444 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.221453 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.221463 | orchestrator | 2025-09-23 08:03:02.221472 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-23 08:03:02.221482 | orchestrator | Tuesday 23 September 2025 07:59:37 +0000 (0:00:00.579) 0:04:57.859 ***** 2025-09-23 08:03:02.221491 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-23 08:03:02.221501 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-23 08:03:02.221510 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-23 08:03:02.221520 | orchestrator 
| changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-23 08:03:02.221530 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-23 08:03:02.221539 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-23 08:03:02.221549 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-23 08:03:02.221558 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-23 08:03:02.221567 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-23 08:03:02.221577 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-23 08:03:02.221586 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.221596 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-23 08:03:02.221605 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.221615 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-23 08:03:02.221625 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.221640 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-23 08:03:02.221650 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-23 08:03:02.221659 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-libvirt'}) 2025-09-23 08:03:02.221675 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-23 08:03:02.221684 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-23 08:03:02.221693 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-23 08:03:02.221703 | orchestrator | 2025-09-23 08:03:02.221717 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-23 08:03:02.221727 | orchestrator | Tuesday 23 September 2025 07:59:43 +0000 (0:00:05.637) 0:05:03.496 ***** 2025-09-23 08:03:02.221737 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-23 08:03:02.221746 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-23 08:03:02.221756 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-23 08:03:02.221765 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-23 08:03:02.221774 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-23 08:03:02.221784 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-23 08:03:02.221793 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-23 08:03:02.221803 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-23 08:03:02.221812 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-23 08:03:02.221822 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 
'dest': 'id_rsa.pub'})  2025-09-23 08:03:02.221831 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-23 08:03:02.221838 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-23 08:03:02.221846 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-23 08:03:02.221854 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.221861 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-23 08:03:02.221869 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.221877 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-23 08:03:02.221884 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.221892 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-23 08:03:02.221900 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-23 08:03:02.221908 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-23 08:03:02.221916 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-23 08:03:02.221923 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-23 08:03:02.221931 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-23 08:03:02.221939 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-23 08:03:02.221947 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-23 08:03:02.221954 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-23 
08:03:02.221962 | orchestrator | 2025-09-23 08:03:02.221970 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-23 08:03:02.221977 | orchestrator | Tuesday 23 September 2025 07:59:50 +0000 (0:00:07.246) 0:05:10.743 ***** 2025-09-23 08:03:02.221985 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.222000 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.222008 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.222050 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.222060 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.222068 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.222076 | orchestrator | 2025-09-23 08:03:02.222084 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-23 08:03:02.222092 | orchestrator | Tuesday 23 September 2025 07:59:51 +0000 (0:00:00.829) 0:05:11.572 ***** 2025-09-23 08:03:02.222100 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.222108 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.222115 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.222123 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.222130 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.222138 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.222146 | orchestrator | 2025-09-23 08:03:02.222154 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-23 08:03:02.222166 | orchestrator | Tuesday 23 September 2025 07:59:52 +0000 (0:00:00.604) 0:05:12.177 ***** 2025-09-23 08:03:02.222174 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.222182 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.222190 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.222198 | orchestrator | changed: [testbed-node-4] 2025-09-23 
08:03:02.222206 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.222226 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.222233 | orchestrator | 2025-09-23 08:03:02.222241 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-23 08:03:02.222249 | orchestrator | Tuesday 23 September 2025 07:59:54 +0000 (0:00:01.997) 0:05:14.175 ***** 2025-09-23 08:03:02.222262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.222271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.222279 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.222294 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.222302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.222314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.222327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.222335 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.222343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-23 08:03:02.222352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-23 08:03:02.222365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.222373 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.222381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.222395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.222404 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.222415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.222423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.222431 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.222439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-23 08:03:02.222452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-23 08:03:02.222460 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.222468 | orchestrator | 2025-09-23 08:03:02.222476 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-23 08:03:02.222484 | orchestrator | Tuesday 23 September 2025 07:59:55 +0000 
(0:00:01.509) 0:05:15.684 ***** 2025-09-23 08:03:02.222491 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-23 08:03:02.222499 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-23 08:03:02.222507 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.222515 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-23 08:03:02.222522 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-23 08:03:02.222530 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.222538 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-23 08:03:02.222546 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-23 08:03:02.222553 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.222561 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-23 08:03:02.222569 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-23 08:03:02.222577 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.222585 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-23 08:03:02.222592 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-23 08:03:02.222600 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.222608 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-23 08:03:02.222616 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-23 08:03:02.222623 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.222631 | orchestrator | 2025-09-23 08:03:02.222639 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-23 08:03:02.222647 | orchestrator | Tuesday 23 September 2025 07:59:56 +0000 (0:00:00.940) 0:05:16.624 ***** 2025-09-23 08:03:02.222664 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222673 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:02.222812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-09-23 08:03:02.222825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-23 08:03:02.222834 | orchestrator | 2025-09-23 08:03:02.222842 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-23 08:03:02.222849 | orchestrator | Tuesday 23 September 2025 07:59:59 +0000 (0:00:02.834) 0:05:19.458 ***** 2025-09-23 08:03:02.222857 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.222865 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.222873 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.222881 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.222889 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.222896 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.222904 | orchestrator | 2025-09-23 08:03:02.222912 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-23 08:03:02.222919 | orchestrator | Tuesday 23 September 2025 08:00:00 +0000 (0:00:00.757) 0:05:20.216 ***** 2025-09-23 08:03:02.222927 | orchestrator | 2025-09-23 08:03:02.222935 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 
2025-09-23 08:03:02.222942 | orchestrator | Tuesday 23 September 2025 08:00:00 +0000 (0:00:00.132) 0:05:20.349 ***** 2025-09-23 08:03:02.222950 | orchestrator | 2025-09-23 08:03:02.222958 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-23 08:03:02.222965 | orchestrator | Tuesday 23 September 2025 08:00:00 +0000 (0:00:00.132) 0:05:20.481 ***** 2025-09-23 08:03:02.222973 | orchestrator | 2025-09-23 08:03:02.222981 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-23 08:03:02.222988 | orchestrator | Tuesday 23 September 2025 08:00:00 +0000 (0:00:00.128) 0:05:20.610 ***** 2025-09-23 08:03:02.222996 | orchestrator | 2025-09-23 08:03:02.223004 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-23 08:03:02.223011 | orchestrator | Tuesday 23 September 2025 08:00:00 +0000 (0:00:00.126) 0:05:20.736 ***** 2025-09-23 08:03:02.223019 | orchestrator | 2025-09-23 08:03:02.223027 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-23 08:03:02.223035 | orchestrator | Tuesday 23 September 2025 08:00:00 +0000 (0:00:00.141) 0:05:20.877 ***** 2025-09-23 08:03:02.223042 | orchestrator | 2025-09-23 08:03:02.223050 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-23 08:03:02.223058 | orchestrator | Tuesday 23 September 2025 08:00:01 +0000 (0:00:00.292) 0:05:21.170 ***** 2025-09-23 08:03:02.223065 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.223073 | orchestrator | changed: [testbed-node-2] 2025-09-23 08:03:02.223081 | orchestrator | changed: [testbed-node-1] 2025-09-23 08:03:02.223089 | orchestrator | 2025-09-23 08:03:02.223096 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-23 08:03:02.223104 | orchestrator | Tuesday 23 September 2025 
08:00:08 +0000 (0:00:07.396) 0:05:28.567 ***** 2025-09-23 08:03:02.223117 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.223124 | orchestrator | changed: [testbed-node-2] 2025-09-23 08:03:02.223132 | orchestrator | changed: [testbed-node-1] 2025-09-23 08:03:02.223140 | orchestrator | 2025-09-23 08:03:02.223152 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-23 08:03:02.223161 | orchestrator | Tuesday 23 September 2025 08:00:20 +0000 (0:00:11.812) 0:05:40.380 ***** 2025-09-23 08:03:02.223168 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.223176 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.223184 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.223191 | orchestrator | 2025-09-23 08:03:02.223199 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-23 08:03:02.223220 | orchestrator | Tuesday 23 September 2025 08:00:44 +0000 (0:00:24.198) 0:06:04.579 ***** 2025-09-23 08:03:02.223228 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.223236 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.223244 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.223252 | orchestrator | 2025-09-23 08:03:02.223259 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-23 08:03:02.223267 | orchestrator | Tuesday 23 September 2025 08:01:19 +0000 (0:00:35.269) 0:06:39.849 ***** 2025-09-23 08:03:02.223278 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-09-23 08:03:02.223286 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.223294 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2025-09-23 08:03:02.223302 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.223310 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.223318 | orchestrator | 2025-09-23 08:03:02.223325 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-23 08:03:02.223333 | orchestrator | Tuesday 23 September 2025 08:01:26 +0000 (0:00:06.515) 0:06:46.364 ***** 2025-09-23 08:03:02.223341 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.223348 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.223356 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.223364 | orchestrator | 2025-09-23 08:03:02.223372 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-23 08:03:02.223379 | orchestrator | Tuesday 23 September 2025 08:01:27 +0000 (0:00:00.824) 0:06:47.189 ***** 2025-09-23 08:03:02.223387 | orchestrator | changed: [testbed-node-3] 2025-09-23 08:03:02.223395 | orchestrator | changed: [testbed-node-5] 2025-09-23 08:03:02.223403 | orchestrator | changed: [testbed-node-4] 2025-09-23 08:03:02.223410 | orchestrator | 2025-09-23 08:03:02.223421 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-23 08:03:02.223435 | orchestrator | Tuesday 23 September 2025 08:01:53 +0000 (0:00:26.423) 0:07:13.612 ***** 2025-09-23 08:03:02.223448 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.223460 | orchestrator | 2025-09-23 08:03:02.223473 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-23 08:03:02.223486 | orchestrator | Tuesday 23 September 2025 08:01:53 +0000 (0:00:00.123) 0:07:13.735 ***** 2025-09-23 08:03:02.223500 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.223508 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.223516 | orchestrator | skipping: [testbed-node-0] 
2025-09-23 08:03:02.223523 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.223531 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.223539 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-23 08:03:02.223547 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-23 08:03:02.223555 | orchestrator | 2025-09-23 08:03:02.223562 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-23 08:03:02.223583 | orchestrator | Tuesday 23 September 2025 08:02:14 +0000 (0:00:21.163) 0:07:34.899 ***** 2025-09-23 08:03:02.223596 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.223608 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.223621 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.223633 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.223645 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.223657 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.223667 | orchestrator | 2025-09-23 08:03:02.223678 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-23 08:03:02.223689 | orchestrator | Tuesday 23 September 2025 08:02:23 +0000 (0:00:08.717) 0:07:43.617 ***** 2025-09-23 08:03:02.223700 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.223712 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.223725 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.223737 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.223750 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.223779 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-09-23 08:03:02.223793 | orchestrator | 2025-09-23 08:03:02.223810 | orchestrator | TASK [nova-cell : 
Get a list of existing cells] ******************************** 2025-09-23 08:03:02.223818 | orchestrator | Tuesday 23 September 2025 08:02:27 +0000 (0:00:03.755) 0:07:47.372 ***** 2025-09-23 08:03:02.223826 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-23 08:03:02.223834 | orchestrator | 2025-09-23 08:03:02.223842 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-23 08:03:02.223850 | orchestrator | Tuesday 23 September 2025 08:02:39 +0000 (0:00:11.991) 0:07:59.363 ***** 2025-09-23 08:03:02.223858 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-23 08:03:02.223866 | orchestrator | 2025-09-23 08:03:02.223873 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-23 08:03:02.223881 | orchestrator | Tuesday 23 September 2025 08:02:40 +0000 (0:00:01.290) 0:08:00.654 ***** 2025-09-23 08:03:02.223889 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.223897 | orchestrator | 2025-09-23 08:03:02.223905 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-23 08:03:02.223912 | orchestrator | Tuesday 23 September 2025 08:02:41 +0000 (0:00:01.305) 0:08:01.960 ***** 2025-09-23 08:03:02.223920 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-23 08:03:02.223928 | orchestrator | 2025-09-23 08:03:02.223943 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-23 08:03:02.223951 | orchestrator | Tuesday 23 September 2025 08:02:52 +0000 (0:00:10.321) 0:08:12.281 ***** 2025-09-23 08:03:02.223959 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:03:02.223968 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:03:02.223975 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:03:02.223983 | orchestrator | ok: [testbed-node-0] 2025-09-23 08:03:02.223991 | orchestrator | ok: 
[testbed-node-1] 2025-09-23 08:03:02.223999 | orchestrator | ok: [testbed-node-2] 2025-09-23 08:03:02.224006 | orchestrator | 2025-09-23 08:03:02.224014 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-23 08:03:02.224022 | orchestrator | 2025-09-23 08:03:02.224029 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-23 08:03:02.224037 | orchestrator | Tuesday 23 September 2025 08:02:54 +0000 (0:00:01.888) 0:08:14.170 ***** 2025-09-23 08:03:02.224045 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:03:02.224053 | orchestrator | changed: [testbed-node-1] 2025-09-23 08:03:02.224061 | orchestrator | changed: [testbed-node-2] 2025-09-23 08:03:02.224068 | orchestrator | 2025-09-23 08:03:02.224081 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-23 08:03:02.224089 | orchestrator | 2025-09-23 08:03:02.224097 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-23 08:03:02.224105 | orchestrator | Tuesday 23 September 2025 08:02:55 +0000 (0:00:01.154) 0:08:15.324 ***** 2025-09-23 08:03:02.224119 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.224127 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.224135 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.224143 | orchestrator | 2025-09-23 08:03:02.224150 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-23 08:03:02.224158 | orchestrator | 2025-09-23 08:03:02.224166 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-23 08:03:02.224174 | orchestrator | Tuesday 23 September 2025 08:02:55 +0000 (0:00:00.508) 0:08:15.833 ***** 2025-09-23 08:03:02.224181 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-23 08:03:02.224189 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-23 08:03:02.224197 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-23 08:03:02.224205 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-23 08:03:02.224257 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-23 08:03:02.224265 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-23 08:03:02.224273 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:03:02.224280 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-23 08:03:02.224288 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-23 08:03:02.224296 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-23 08:03:02.224304 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-23 08:03:02.224312 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-23 08:03:02.224319 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-23 08:03:02.224327 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:03:02.224335 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-23 08:03:02.224343 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-23 08:03:02.224351 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-23 08:03:02.224359 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-23 08:03:02.224366 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-23 08:03:02.224374 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-23 08:03:02.224382 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:03:02.224390 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-23 08:03:02.224398 | 
orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-23 08:03:02.224406 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-23 08:03:02.224413 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-23 08:03:02.224421 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-23 08:03:02.224429 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-23 08:03:02.224437 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.224444 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-23 08:03:02.224452 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-23 08:03:02.224460 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-23 08:03:02.224468 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-23 08:03:02.224475 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-23 08:03:02.224483 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-23 08:03:02.224491 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.224499 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-23 08:03:02.224507 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-23 08:03:02.224514 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-23 08:03:02.224564 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-23 08:03:02.224573 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-23 08:03:02.224580 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-23 08:03:02.224588 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.224596 | orchestrator | 2025-09-23 08:03:02.224604 | orchestrator | PLAY [Reload global Nova API services] 
***************************************** 2025-09-23 08:03:02.224612 | orchestrator | 2025-09-23 08:03:02.224620 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-23 08:03:02.224633 | orchestrator | Tuesday 23 September 2025 08:02:57 +0000 (0:00:01.380) 0:08:17.213 ***** 2025-09-23 08:03:02.224641 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-23 08:03:02.224649 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-23 08:03:02.224657 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.224665 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-23 08:03:02.224672 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-23 08:03:02.224680 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.224688 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-23 08:03:02.224695 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-23 08:03:02.224703 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.224711 | orchestrator | 2025-09-23 08:03:02.224718 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-23 08:03:02.224726 | orchestrator | 2025-09-23 08:03:02.224734 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-23 08:03:02.224746 | orchestrator | Tuesday 23 September 2025 08:02:57 +0000 (0:00:00.735) 0:08:17.948 ***** 2025-09-23 08:03:02.224754 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.224762 | orchestrator | 2025-09-23 08:03:02.224770 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-23 08:03:02.224778 | orchestrator | 2025-09-23 08:03:02.224785 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-09-23 08:03:02.224792 | 
orchestrator | Tuesday 23 September 2025 08:02:58 +0000 (0:00:00.712) 0:08:18.661 ***** 2025-09-23 08:03:02.224799 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:03:02.224806 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:03:02.224812 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:03:02.224819 | orchestrator | 2025-09-23 08:03:02.224825 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 08:03:02.224832 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 08:03:02.224839 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-23 08:03:02.224846 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-23 08:03:02.224853 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-23 08:03:02.224859 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-23 08:03:02.224866 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-09-23 08:03:02.224872 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-23 08:03:02.224879 | orchestrator | 2025-09-23 08:03:02.224886 | orchestrator | 2025-09-23 08:03:02.224896 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 08:03:02.224903 | orchestrator | Tuesday 23 September 2025 08:02:59 +0000 (0:00:00.416) 0:08:19.078 ***** 2025-09-23 08:03:02.224910 | orchestrator | =============================================================================== 2025-09-23 08:03:02.224916 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 35.27s 
2025-09-23 08:03:02.224923 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.65s 2025-09-23 08:03:02.224929 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.42s 2025-09-23 08:03:02.224936 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.20s 2025-09-23 08:03:02.224942 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.16s 2025-09-23 08:03:02.224949 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.71s 2025-09-23 08:03:02.224955 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.89s 2025-09-23 08:03:02.224962 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.33s 2025-09-23 08:03:02.224968 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.91s 2025-09-23 08:03:02.224975 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.99s 2025-09-23 08:03:02.224981 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.81s 2025-09-23 08:03:02.224988 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.81s 2025-09-23 08:03:02.224994 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.75s 2025-09-23 08:03:02.225001 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.37s 2025-09-23 08:03:02.225007 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.32s 2025-09-23 08:03:02.225013 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.72s 2025-09-23 08:03:02.225020 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.60s 2025-09-23 
08:03:02.225026 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.92s 2025-09-23 08:03:02.225036 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.40s 2025-09-23 08:03:02.225043 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.25s 2025-09-23 08:03:05.250953 | orchestrator | 2025-09-23 08:03:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:08.296647 | orchestrator | 2025-09-23 08:03:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:11.344129 | orchestrator | 2025-09-23 08:03:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:14.383731 | orchestrator | 2025-09-23 08:03:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:17.424524 | orchestrator | 2025-09-23 08:03:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:20.468081 | orchestrator | 2025-09-23 08:03:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:23.512802 | orchestrator | 2025-09-23 08:03:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:26.560074 | orchestrator | 2025-09-23 08:03:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:29.602694 | orchestrator | 2025-09-23 08:03:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:32.644670 | orchestrator | 2025-09-23 08:03:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:35.685857 | orchestrator | 2025-09-23 08:03:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:38.728946 | orchestrator | 2025-09-23 08:03:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:41.773262 | orchestrator | 2025-09-23 08:03:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:44.813315 | orchestrator | 
2025-09-23 08:03:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:47.849445 | orchestrator | 2025-09-23 08:03:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:50.889922 | orchestrator | 2025-09-23 08:03:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:53.929876 | orchestrator | 2025-09-23 08:03:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:03:56.966704 | orchestrator | 2025-09-23 08:03:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:04:00.018709 | orchestrator | 2025-09-23 08:04:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-23 08:04:03.059030 | orchestrator | 2025-09-23 08:04:03.347431 | orchestrator | 2025-09-23 08:04:03.352348 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Sep 23 08:04:03 UTC 2025 2025-09-23 08:04:03.352437 | orchestrator | 2025-09-23 08:04:03.732887 | orchestrator | ok: Runtime: 0:34:51.824347 2025-09-23 08:04:03.985667 | 2025-09-23 08:04:03.985820 | TASK [Bootstrap services] 2025-09-23 08:04:04.777431 | orchestrator | 2025-09-23 08:04:04.777623 | orchestrator | # BOOTSTRAP 2025-09-23 08:04:04.777648 | orchestrator | 2025-09-23 08:04:04.777663 | orchestrator | + set -e 2025-09-23 08:04:04.777676 | orchestrator | + echo 2025-09-23 08:04:04.777690 | orchestrator | + echo '# BOOTSTRAP' 2025-09-23 08:04:04.777707 | orchestrator | + echo 2025-09-23 08:04:04.777752 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-23 08:04:04.788289 | orchestrator | + set -e 2025-09-23 08:04:04.788368 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-23 08:04:09.322737 | orchestrator | 2025-09-23 08:04:09 | INFO  | It takes a moment until task 33f2d778-56b5-44f8-9a1f-31c92f056bc3 (flavor-manager) has been started and output is visible here. 
2025-09-23 08:04:17.642527 | orchestrator | 2025-09-23 08:04:12 | INFO  | Flavor SCS-1L-1 created 2025-09-23 08:04:17.642806 | orchestrator | 2025-09-23 08:04:13 | INFO  | Flavor SCS-1L-1-5 created 2025-09-23 08:04:17.642833 | orchestrator | 2025-09-23 08:04:13 | INFO  | Flavor SCS-1V-2 created 2025-09-23 08:04:17.642849 | orchestrator | 2025-09-23 08:04:13 | INFO  | Flavor SCS-1V-2-5 created 2025-09-23 08:04:17.642869 | orchestrator | 2025-09-23 08:04:13 | INFO  | Flavor SCS-1V-4 created 2025-09-23 08:04:17.642881 | orchestrator | 2025-09-23 08:04:13 | INFO  | Flavor SCS-1V-4-10 created 2025-09-23 08:04:17.642893 | orchestrator | 2025-09-23 08:04:14 | INFO  | Flavor SCS-1V-8 created 2025-09-23 08:04:17.642905 | orchestrator | 2025-09-23 08:04:14 | INFO  | Flavor SCS-1V-8-20 created 2025-09-23 08:04:17.642931 | orchestrator | 2025-09-23 08:04:14 | INFO  | Flavor SCS-2V-4 created 2025-09-23 08:04:17.642944 | orchestrator | 2025-09-23 08:04:14 | INFO  | Flavor SCS-2V-4-10 created 2025-09-23 08:04:17.642955 | orchestrator | 2025-09-23 08:04:14 | INFO  | Flavor SCS-2V-8 created 2025-09-23 08:04:17.642967 | orchestrator | 2025-09-23 08:04:14 | INFO  | Flavor SCS-2V-8-20 created 2025-09-23 08:04:17.642978 | orchestrator | 2025-09-23 08:04:14 | INFO  | Flavor SCS-2V-16 created 2025-09-23 08:04:17.642989 | orchestrator | 2025-09-23 08:04:15 | INFO  | Flavor SCS-2V-16-50 created 2025-09-23 08:04:17.643000 | orchestrator | 2025-09-23 08:04:15 | INFO  | Flavor SCS-4V-8 created 2025-09-23 08:04:17.643012 | orchestrator | 2025-09-23 08:04:15 | INFO  | Flavor SCS-4V-8-20 created 2025-09-23 08:04:17.643023 | orchestrator | 2025-09-23 08:04:15 | INFO  | Flavor SCS-4V-16 created 2025-09-23 08:04:17.643034 | orchestrator | 2025-09-23 08:04:15 | INFO  | Flavor SCS-4V-16-50 created 2025-09-23 08:04:17.643045 | orchestrator | 2025-09-23 08:04:15 | INFO  | Flavor SCS-4V-32 created 2025-09-23 08:04:17.643056 | orchestrator | 2025-09-23 08:04:15 | INFO  | Flavor SCS-4V-32-100 created 
2025-09-23 08:04:17.643067 | orchestrator | 2025-09-23 08:04:16 | INFO  | Flavor SCS-8V-16 created 2025-09-23 08:04:17.643079 | orchestrator | 2025-09-23 08:04:16 | INFO  | Flavor SCS-8V-16-50 created 2025-09-23 08:04:17.643090 | orchestrator | 2025-09-23 08:04:16 | INFO  | Flavor SCS-8V-32 created 2025-09-23 08:04:17.643102 | orchestrator | 2025-09-23 08:04:16 | INFO  | Flavor SCS-8V-32-100 created 2025-09-23 08:04:17.643113 | orchestrator | 2025-09-23 08:04:16 | INFO  | Flavor SCS-16V-32 created 2025-09-23 08:04:17.643124 | orchestrator | 2025-09-23 08:04:16 | INFO  | Flavor SCS-16V-32-100 created 2025-09-23 08:04:17.643174 | orchestrator | 2025-09-23 08:04:17 | INFO  | Flavor SCS-2V-4-20s created 2025-09-23 08:04:17.643186 | orchestrator | 2025-09-23 08:04:17 | INFO  | Flavor SCS-4V-8-50s created 2025-09-23 08:04:17.643197 | orchestrator | 2025-09-23 08:04:17 | INFO  | Flavor SCS-8V-32-100s created 2025-09-23 08:04:19.854968 | orchestrator | 2025-09-23 08:04:19 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-23 08:04:30.012904 | orchestrator | 2025-09-23 08:04:30 | INFO  | Task d074cdc8-5194-429a-b7ac-02643f6acb63 (bootstrap-basic) was prepared for execution. 2025-09-23 08:04:30.013035 | orchestrator | 2025-09-23 08:04:30 | INFO  | It takes a moment until task d074cdc8-5194-429a-b7ac-02643f6acb63 (bootstrap-basic) has been started and output is visible here. 
2025-09-23 08:05:32.087762 | orchestrator | 2025-09-23 08:05:32.087866 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-23 08:05:32.087882 | orchestrator | 2025-09-23 08:05:32.087895 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-23 08:05:32.087907 | orchestrator | Tuesday 23 September 2025 08:04:34 +0000 (0:00:00.133) 0:00:00.133 ***** 2025-09-23 08:05:32.087919 | orchestrator | ok: [localhost] 2025-09-23 08:05:32.087932 | orchestrator | 2025-09-23 08:05:32.087944 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-23 08:05:32.087956 | orchestrator | Tuesday 23 September 2025 08:04:36 +0000 (0:00:01.874) 0:00:02.007 ***** 2025-09-23 08:05:32.087967 | orchestrator | ok: [localhost] 2025-09-23 08:05:32.087979 | orchestrator | 2025-09-23 08:05:32.087991 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-23 08:05:32.088002 | orchestrator | Tuesday 23 September 2025 08:04:44 +0000 (0:00:08.106) 0:00:10.113 ***** 2025-09-23 08:05:32.088014 | orchestrator | changed: [localhost] 2025-09-23 08:05:32.088025 | orchestrator | 2025-09-23 08:05:32.088037 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-23 08:05:32.088049 | orchestrator | Tuesday 23 September 2025 08:04:51 +0000 (0:00:07.685) 0:00:17.799 ***** 2025-09-23 08:05:32.088060 | orchestrator | ok: [localhost] 2025-09-23 08:05:32.088072 | orchestrator | 2025-09-23 08:05:32.088083 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-23 08:05:32.088095 | orchestrator | Tuesday 23 September 2025 08:04:58 +0000 (0:00:06.961) 0:00:24.760 ***** 2025-09-23 08:05:32.088112 | orchestrator | changed: [localhost] 2025-09-23 08:05:32.088124 | orchestrator | 2025-09-23 08:05:32.088136 | orchestrator | 
TASK [Create public network] *************************************************** 2025-09-23 08:05:32.088147 | orchestrator | Tuesday 23 September 2025 08:05:05 +0000 (0:00:06.850) 0:00:31.611 ***** 2025-09-23 08:05:32.088158 | orchestrator | changed: [localhost] 2025-09-23 08:05:32.088170 | orchestrator | 2025-09-23 08:05:32.088181 | orchestrator | TASK [Set public network to default] ******************************************* 2025-09-23 08:05:32.088192 | orchestrator | Tuesday 23 September 2025 08:05:12 +0000 (0:00:06.922) 0:00:38.534 ***** 2025-09-23 08:05:32.088204 | orchestrator | changed: [localhost] 2025-09-23 08:05:32.088215 | orchestrator | 2025-09-23 08:05:32.088227 | orchestrator | TASK [Create public subnet] **************************************************** 2025-09-23 08:05:32.088248 | orchestrator | Tuesday 23 September 2025 08:05:19 +0000 (0:00:06.680) 0:00:45.215 ***** 2025-09-23 08:05:32.088260 | orchestrator | changed: [localhost] 2025-09-23 08:05:32.088272 | orchestrator | 2025-09-23 08:05:32.088283 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-09-23 08:05:32.088295 | orchestrator | Tuesday 23 September 2025 08:05:23 +0000 (0:00:04.633) 0:00:49.848 ***** 2025-09-23 08:05:32.088306 | orchestrator | changed: [localhost] 2025-09-23 08:05:32.088318 | orchestrator | 2025-09-23 08:05:32.088329 | orchestrator | TASK [Create manager role] ***************************************************** 2025-09-23 08:05:32.088341 | orchestrator | Tuesday 23 September 2025 08:05:28 +0000 (0:00:04.452) 0:00:54.301 ***** 2025-09-23 08:05:32.088352 | orchestrator | ok: [localhost] 2025-09-23 08:05:32.088364 | orchestrator | 2025-09-23 08:05:32.088375 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 08:05:32.088387 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-23 08:05:32.088435 | orchestrator 
| 2025-09-23 08:05:32.088448 | orchestrator | 2025-09-23 08:05:32.088460 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 08:05:32.088497 | orchestrator | Tuesday 23 September 2025 08:05:31 +0000 (0:00:03.504) 0:00:57.806 ***** 2025-09-23 08:05:32.088508 | orchestrator | =============================================================================== 2025-09-23 08:05:32.088520 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.11s 2025-09-23 08:05:32.088531 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.69s 2025-09-23 08:05:32.088542 | orchestrator | Get volume type local --------------------------------------------------- 6.96s 2025-09-23 08:05:32.088554 | orchestrator | Create public network --------------------------------------------------- 6.92s 2025-09-23 08:05:32.088565 | orchestrator | Create volume type local ------------------------------------------------ 6.85s 2025-09-23 08:05:32.088576 | orchestrator | Set public network to default ------------------------------------------- 6.68s 2025-09-23 08:05:32.088587 | orchestrator | Create public subnet ---------------------------------------------------- 4.63s 2025-09-23 08:05:32.088598 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.45s 2025-09-23 08:05:32.088610 | orchestrator | Create manager role ----------------------------------------------------- 3.50s 2025-09-23 08:05:32.088621 | orchestrator | Gathering Facts --------------------------------------------------------- 1.87s 2025-09-23 08:05:34.417560 | orchestrator | 2025-09-23 08:05:34 | INFO  | It takes a moment until task fa811b08-d0f7-4cdb-9396-5237c5020ac5 (image-manager) has been started and output is visible here. 
2025-09-23 08:06:17.246269 | orchestrator | 2025-09-23 08:05:37 | INFO  | Processing image 'Cirros 0.6.2' 2025-09-23 08:06:17.246380 | orchestrator | 2025-09-23 08:05:37 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-09-23 08:06:17.246400 | orchestrator | 2025-09-23 08:05:37 | INFO  | Importing image Cirros 0.6.2 2025-09-23 08:06:17.246413 | orchestrator | 2025-09-23 08:05:37 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-23 08:06:17.246425 | orchestrator | 2025-09-23 08:05:39 | INFO  | Waiting for image to leave queued state... 2025-09-23 08:06:17.246438 | orchestrator | 2025-09-23 08:05:43 | INFO  | Waiting for import to complete... 2025-09-23 08:06:17.246449 | orchestrator | 2025-09-23 08:05:53 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-09-23 08:06:17.246475 | orchestrator | 2025-09-23 08:05:53 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-09-23 08:06:17.246487 | orchestrator | 2025-09-23 08:05:53 | INFO  | Setting internal_version = 0.6.2 2025-09-23 08:06:17.246498 | orchestrator | 2025-09-23 08:05:53 | INFO  | Setting image_original_user = cirros 2025-09-23 08:06:17.246510 | orchestrator | 2025-09-23 08:05:53 | INFO  | Adding tag os:cirros 2025-09-23 08:06:17.246521 | orchestrator | 2025-09-23 08:05:53 | INFO  | Setting property architecture: x86_64 2025-09-23 08:06:17.246533 | orchestrator | 2025-09-23 08:05:54 | INFO  | Setting property hw_disk_bus: scsi 2025-09-23 08:06:17.246544 | orchestrator | 2025-09-23 08:05:54 | INFO  | Setting property hw_rng_model: virtio 2025-09-23 08:06:17.246617 | orchestrator | 2025-09-23 08:05:54 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-23 08:06:17.246629 | orchestrator | 2025-09-23 08:05:54 | INFO  | Setting property hw_watchdog_action: reset 2025-09-23 08:06:17.246641 | orchestrator | 2025-09-23 08:05:55 | 
INFO  | Setting property hypervisor_type: qemu 2025-09-23 08:06:17.246653 | orchestrator | 2025-09-23 08:05:55 | INFO  | Setting property os_distro: cirros 2025-09-23 08:06:17.246664 | orchestrator | 2025-09-23 08:05:55 | INFO  | Setting property os_purpose: minimal 2025-09-23 08:06:17.246676 | orchestrator | 2025-09-23 08:05:55 | INFO  | Setting property replace_frequency: never 2025-09-23 08:06:17.246709 | orchestrator | 2025-09-23 08:05:56 | INFO  | Setting property uuid_validity: none 2025-09-23 08:06:17.246720 | orchestrator | 2025-09-23 08:05:56 | INFO  | Setting property provided_until: none 2025-09-23 08:06:17.246740 | orchestrator | 2025-09-23 08:05:56 | INFO  | Setting property image_description: Cirros 2025-09-23 08:06:17.246757 | orchestrator | 2025-09-23 08:05:56 | INFO  | Setting property image_name: Cirros 2025-09-23 08:06:17.246768 | orchestrator | 2025-09-23 08:05:56 | INFO  | Setting property internal_version: 0.6.2 2025-09-23 08:06:17.246779 | orchestrator | 2025-09-23 08:05:57 | INFO  | Setting property image_original_user: cirros 2025-09-23 08:06:17.246794 | orchestrator | 2025-09-23 08:05:57 | INFO  | Setting property os_version: 0.6.2 2025-09-23 08:06:17.246807 | orchestrator | 2025-09-23 08:05:57 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-23 08:06:17.246821 | orchestrator | 2025-09-23 08:05:57 | INFO  | Setting property image_build_date: 2023-05-30 2025-09-23 08:06:17.246833 | orchestrator | 2025-09-23 08:05:58 | INFO  | Checking status of 'Cirros 0.6.2' 2025-09-23 08:06:17.246846 | orchestrator | 2025-09-23 08:05:58 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-09-23 08:06:17.246859 | orchestrator | 2025-09-23 08:05:58 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-09-23 08:06:17.246871 | orchestrator | 2025-09-23 08:05:58 | INFO  | Processing image 'Cirros 0.6.3' 2025-09-23 08:06:17.246884 | orchestrator | 2025-09-23 
08:05:58 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-09-23 08:06:17.246897 | orchestrator | 2025-09-23 08:05:58 | INFO  | Importing image Cirros 0.6.3
2025-09-23 08:06:17.246910 | orchestrator | 2025-09-23 08:05:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-23 08:06:17.246922 | orchestrator | 2025-09-23 08:05:59 | INFO  | Waiting for image to leave queued state...
2025-09-23 08:06:17.246935 | orchestrator | 2025-09-23 08:06:01 | INFO  | Waiting for import to complete...
2025-09-23 08:06:17.246975 | orchestrator | 2025-09-23 08:06:12 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-09-23 08:06:17.246990 | orchestrator | 2025-09-23 08:06:12 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-09-23 08:06:17.247003 | orchestrator | 2025-09-23 08:06:12 | INFO  | Setting internal_version = 0.6.3
2025-09-23 08:06:17.247016 | orchestrator | 2025-09-23 08:06:12 | INFO  | Setting image_original_user = cirros
2025-09-23 08:06:17.247030 | orchestrator | 2025-09-23 08:06:12 | INFO  | Adding tag os:cirros
2025-09-23 08:06:17.247043 | orchestrator | 2025-09-23 08:06:12 | INFO  | Setting property architecture: x86_64
2025-09-23 08:06:17.247055 | orchestrator | 2025-09-23 08:06:12 | INFO  | Setting property hw_disk_bus: scsi
2025-09-23 08:06:17.247068 | orchestrator | 2025-09-23 08:06:12 | INFO  | Setting property hw_rng_model: virtio
2025-09-23 08:06:17.247080 | orchestrator | 2025-09-23 08:06:13 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-23 08:06:17.247094 | orchestrator | 2025-09-23 08:06:13 | INFO  | Setting property hw_watchdog_action: reset
2025-09-23 08:06:17.247106 | orchestrator | 2025-09-23 08:06:13 | INFO  | Setting property hypervisor_type: qemu
2025-09-23 08:06:17.247119 | orchestrator | 2025-09-23 08:06:13 | INFO  | Setting property os_distro: cirros
2025-09-23 08:06:17.247140 | orchestrator | 2025-09-23 08:06:14 | INFO  | Setting property os_purpose: minimal
2025-09-23 08:06:17.247153 | orchestrator | 2025-09-23 08:06:14 | INFO  | Setting property replace_frequency: never
2025-09-23 08:06:17.247166 | orchestrator | 2025-09-23 08:06:14 | INFO  | Setting property uuid_validity: none
2025-09-23 08:06:17.247179 | orchestrator | 2025-09-23 08:06:14 | INFO  | Setting property provided_until: none
2025-09-23 08:06:17.247191 | orchestrator | 2025-09-23 08:06:14 | INFO  | Setting property image_description: Cirros
2025-09-23 08:06:17.247203 | orchestrator | 2025-09-23 08:06:15 | INFO  | Setting property image_name: Cirros
2025-09-23 08:06:17.247214 | orchestrator | 2025-09-23 08:06:15 | INFO  | Setting property internal_version: 0.6.3
2025-09-23 08:06:17.247225 | orchestrator | 2025-09-23 08:06:15 | INFO  | Setting property image_original_user: cirros
2025-09-23 08:06:17.247236 | orchestrator | 2025-09-23 08:06:15 | INFO  | Setting property os_version: 0.6.3
2025-09-23 08:06:17.247247 | orchestrator | 2025-09-23 08:06:15 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-23 08:06:17.247258 | orchestrator | 2025-09-23 08:06:16 | INFO  | Setting property image_build_date: 2024-09-26
2025-09-23 08:06:17.247274 | orchestrator | 2025-09-23 08:06:16 | INFO  | Checking status of 'Cirros 0.6.3'
2025-09-23 08:06:17.247286 | orchestrator | 2025-09-23 08:06:16 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-09-23 08:06:17.247297 | orchestrator | 2025-09-23 08:06:16 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-09-23 08:06:17.582928 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-09-23 08:06:19.809260 | orchestrator | 2025-09-23 08:06:19 | INFO  | date: 2025-09-23
2025-09-23 08:06:19.809338 | orchestrator | 2025-09-23 08:06:19 | INFO  | image: octavia-amphora-haproxy-2024.2.20250923.qcow2
2025-09-23 08:06:19.809350 | orchestrator | 2025-09-23 08:06:19 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250923.qcow2
2025-09-23 08:06:19.809376 | orchestrator | 2025-09-23 08:06:19 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250923.qcow2.CHECKSUM
2025-09-23 08:06:19.842285 | orchestrator | 2025-09-23 08:06:19 | INFO  | checksum: 2854a798881734eae43577e73433aa17c901907369c765882c2519a7d149d716
2025-09-23 08:06:19.916069 | orchestrator | 2025-09-23 08:06:19 | INFO  | It takes a moment until task 374cb254-1094-4265-be8c-77541c8e17ec (image-manager) has been started and output is visible here.
2025-09-23 08:07:21.539282 | orchestrator | 2025-09-23 08:06:22 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-23'
2025-09-23 08:07:21.539387 | orchestrator | 2025-09-23 08:06:22 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250923.qcow2: 200
2025-09-23 08:07:21.539406 | orchestrator | 2025-09-23 08:06:22 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-23
2025-09-23 08:07:21.539417 | orchestrator | 2025-09-23 08:06:22 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250923.qcow2
2025-09-23 08:07:21.539429 | orchestrator | 2025-09-23 08:06:23 | INFO  | Waiting for image to leave queued state...
2025-09-23 08:07:21.539440 | orchestrator | 2025-09-23 08:06:25 | INFO  | Waiting for import to complete...
2025-09-23 08:07:21.539472 | orchestrator | 2025-09-23 08:06:35 | INFO  | Waiting for import to complete...
2025-09-23 08:07:21.539483 | orchestrator | 2025-09-23 08:06:45 | INFO  | Waiting for import to complete...
2025-09-23 08:07:21.539492 | orchestrator | 2025-09-23 08:06:55 | INFO  | Waiting for import to complete...
2025-09-23 08:07:21.539502 | orchestrator | 2025-09-23 08:07:06 | INFO  | Waiting for import to complete...
2025-09-23 08:07:21.539512 | orchestrator | 2025-09-23 08:07:16 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-23' successfully completed, reloading images
2025-09-23 08:07:21.539523 | orchestrator | 2025-09-23 08:07:16 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-23'
2025-09-23 08:07:21.539533 | orchestrator | 2025-09-23 08:07:16 | INFO  | Setting internal_version = 2025-09-23
2025-09-23 08:07:21.539542 | orchestrator | 2025-09-23 08:07:16 | INFO  | Setting image_original_user = ubuntu
2025-09-23 08:07:21.539553 | orchestrator | 2025-09-23 08:07:16 | INFO  | Adding tag amphora
2025-09-23 08:07:21.539563 | orchestrator | 2025-09-23 08:07:16 | INFO  | Adding tag os:ubuntu
2025-09-23 08:07:21.539572 | orchestrator | 2025-09-23 08:07:16 | INFO  | Setting property architecture: x86_64
2025-09-23 08:07:21.539582 | orchestrator | 2025-09-23 08:07:17 | INFO  | Setting property hw_disk_bus: scsi
2025-09-23 08:07:21.539592 | orchestrator | 2025-09-23 08:07:17 | INFO  | Setting property hw_rng_model: virtio
2025-09-23 08:07:21.539602 | orchestrator | 2025-09-23 08:07:17 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-23 08:07:21.539626 | orchestrator | 2025-09-23 08:07:17 | INFO  | Setting property hw_watchdog_action: reset
2025-09-23 08:07:21.539636 | orchestrator | 2025-09-23 08:07:18 | INFO  | Setting property hypervisor_type: qemu
2025-09-23 08:07:21.539646 | orchestrator | 2025-09-23 08:07:18 | INFO  | Setting property os_distro: ubuntu
2025-09-23 08:07:21.539656 | orchestrator | 2025-09-23 08:07:18 | INFO  | Setting property replace_frequency: quarterly
2025-09-23 08:07:21.539665 | orchestrator | 2025-09-23 08:07:18 | INFO  | Setting property uuid_validity: last-1
2025-09-23 08:07:21.539675 | orchestrator | 2025-09-23 08:07:18 | INFO  | Setting property provided_until: none
2025-09-23 08:07:21.539685 | orchestrator | 2025-09-23 08:07:19 | INFO  | Setting property os_purpose: network
2025-09-23 08:07:21.539695 | orchestrator | 2025-09-23 08:07:19 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-09-23 08:07:21.539705 | orchestrator | 2025-09-23 08:07:19 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-09-23 08:07:21.539715 | orchestrator | 2025-09-23 08:07:19 | INFO  | Setting property internal_version: 2025-09-23
2025-09-23 08:07:21.539724 | orchestrator | 2025-09-23 08:07:20 | INFO  | Setting property image_original_user: ubuntu
2025-09-23 08:07:21.539734 | orchestrator | 2025-09-23 08:07:20 | INFO  | Setting property os_version: 2025-09-23
2025-09-23 08:07:21.539745 | orchestrator | 2025-09-23 08:07:20 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250923.qcow2
2025-09-23 08:07:21.539755 | orchestrator | 2025-09-23 08:07:20 | INFO  | Setting property image_build_date: 2025-09-23
2025-09-23 08:07:21.539765 | orchestrator | 2025-09-23 08:07:21 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-23'
2025-09-23 08:07:21.539775 | orchestrator | 2025-09-23 08:07:21 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-23'
2025-09-23 08:07:21.539807 | orchestrator | 2025-09-23 08:07:21 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-09-23 08:07:21.539818 | orchestrator | 2025-09-23 08:07:21 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-09-23 08:07:21.539829 | orchestrator | 2025-09-23 08:07:21 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-09-23 08:07:21.539866 | orchestrator | 2025-09-23 08:07:21 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-09-23 08:07:22.152816 | orchestrator | ok: Runtime: 0:03:17.489396
2025-09-23 08:07:22.174824 |
2025-09-23 08:07:22.174984 | TASK [Run checks]
2025-09-23 08:07:22.876063 | orchestrator | + set -e
2025-09-23 08:07:22.876533 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-23 08:07:22.876565 | orchestrator | ++ export INTERACTIVE=false
2025-09-23 08:07:22.876589 | orchestrator | ++ INTERACTIVE=false
2025-09-23 08:07:22.876602 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-23 08:07:22.876615 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-23 08:07:22.876643 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-23 08:07:22.877951 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-23 08:07:22.884272 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-23 08:07:22.884386 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-23 08:07:22.884400 | orchestrator | + echo
2025-09-23 08:07:22.884419 | orchestrator |
2025-09-23 08:07:22.884431 | orchestrator | # CHECK
2025-09-23 08:07:22.884443 | orchestrator |
2025-09-23 08:07:22.884465 | orchestrator | + echo '# CHECK'
2025-09-23 08:07:22.884477 | orchestrator | + echo
2025-09-23 08:07:22.884492 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-23 08:07:22.885088 | orchestrator | ++ semver latest 5.0.0
2025-09-23 08:07:22.952112 | orchestrator |
2025-09-23 08:07:22.952213 | orchestrator | ## Containers @ testbed-manager
2025-09-23 08:07:22.952228 | orchestrator |
2025-09-23 08:07:22.952242 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-23 08:07:22.952253 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-23 08:07:22.952265 | orchestrator | + echo
2025-09-23 08:07:22.952277 | orchestrator | + echo '## Containers @ testbed-manager'
2025-09-23 08:07:22.952289 | orchestrator | + echo
2025-09-23 08:07:22.952301 | orchestrator | + osism container testbed-manager ps
2025-09-23 08:07:25.444299 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-23 08:07:25.444393 | orchestrator | 154a342cacde registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter
2025-09-23 08:07:25.444410 | orchestrator | 1cc5eda5df73 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager
2025-09-23 08:07:25.444417 | orchestrator | 8fe6f7df8666 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-09-23 08:07:25.444429 | orchestrator | b14b73df5add registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes prometheus_node_exporter
2025-09-23 08:07:25.444719 | orchestrator | 690b24c1b9db registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server
2025-09-23 08:07:25.444737 | orchestrator | f5bdc51fd036 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient
2025-09-23 08:07:25.444745 | orchestrator | 9d52ac18c690 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-09-23 08:07:25.444752 | orchestrator | 8bb9bb1ac76f registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-23 08:07:25.444759 | orchestrator | 027ce75832e7 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-23 08:07:25.444788 | orchestrator | 27a2dd3f407c phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin
2025-09-23 08:07:25.444796 | orchestrator | 9329ab3b2579 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient
2025-09-23 08:07:25.444803 | orchestrator | 3d0647a59d05 registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2025-09-23 08:07:25.444810 | orchestrator | 27c082422c03 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 55 minutes ago Up 54 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-09-23 08:07:25.444817 | orchestrator | 973695090a8c registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 59 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1
2025-09-23 08:07:25.444825 | orchestrator | d46279e95e99 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) osism-kubernetes
2025-09-23 08:07:25.444832 | orchestrator | d7bd9e0e3604 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) osism-ansible
2025-09-23 08:07:25.444843 | orchestrator | 568731e02a22 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) ceph-ansible
2025-09-23 08:07:25.444872 | orchestrator | 6dd500816a70 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) kolla-ansible
2025-09-23 08:07:25.444879 | orchestrator | 3b2b777277e5 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 59 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1
2025-09-23 08:07:25.444886 | orchestrator | 11ced2c28b8b registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" 59 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1
2025-09-23 08:07:25.444902 | orchestrator | bc3fc6bea3ef registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 59 minutes ago Up 39 minutes (healthy) osismclient
2025-09-23 08:07:25.444910 | orchestrator | 589437dabd67 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-09-23 08:07:25.444917 | orchestrator | 22dd8cf41c7c registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 59 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1
2025-09-23 08:07:25.444930 | orchestrator | dde4e1bf94d5 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-listener-1
2025-09-23 08:07:25.444937 | orchestrator | 47926adc7a10 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-openstack-1
2025-09-23 08:07:25.444945 | orchestrator | b1d46230197e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-flower-1
2025-09-23 08:07:25.444952 | orchestrator | b42dde00ed25 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-beat-1
2025-09-23 08:07:25.444959 | orchestrator | 5a00a93617bc registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 59 minutes ago Up 39 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-09-23 08:07:25.444967 | orchestrator | c181da8e36c9 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-09-23 08:07:25.851888 | orchestrator |
2025-09-23 08:07:25.851992 | orchestrator | ## Images @ testbed-manager
2025-09-23 08:07:25.852009 | orchestrator |
2025-09-23 08:07:25.852021 | orchestrator | + echo
2025-09-23 08:07:25.852033 | orchestrator | + echo '## Images @ testbed-manager'
2025-09-23 08:07:25.852046 | orchestrator | + echo
2025-09-23 08:07:25.852057 | orchestrator | + osism container testbed-manager images
2025-09-23 08:07:28.073239 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-23 08:07:28.073354 | orchestrator | registry.osism.tech/osism/homer v25.08.1 fa7c5e3d4ccd 5 hours ago 11.5MB
2025-09-23 08:07:28.073373 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 e831299e099a 5 hours ago 243MB
2025-09-23 08:07:28.073386 | orchestrator | registry.osism.tech/osism/cephclient reef 2ac30dd76bd7 5 hours ago 453MB
2025-09-23 08:07:28.073397 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 3000ad24ff14 6 hours ago 631MB
2025-09-23 08:07:28.073426 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 72911f8746e1 6 hours ago 748MB
2025-09-23 08:07:28.073438 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5a589725d8fe 6 hours ago 320MB
2025-09-23 08:07:28.073449 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 425cb06b4cef 6 hours ago 363MB
2025-09-23 08:07:28.073460 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 7f1b1f4a4ca8 6 hours ago 894MB
2025-09-23 08:07:28.073471 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 0ee914a8f229 6 hours ago 360MB
2025-09-23 08:07:28.073482 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 44a084c0fe4c 6 hours ago 412MB
2025-09-23 08:07:28.073493 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 0680a4fa6cb6 6 hours ago 459MB
2025-09-23 08:07:28.073504 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 1bdfa7450734 8 hours ago 590MB
2025-09-23 08:07:28.073515 | orchestrator | registry.osism.tech/osism/osism-ansible latest 6855a9081056 8 hours ago 594MB
2025-09-23 08:07:28.073526 | orchestrator | registry.osism.tech/osism/ceph-ansible reef e2591b952b3d 8 hours ago 543MB
2025-09-23 08:07:28.073555 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 9e9442c33417 8 hours ago 1.22GB
2025-09-23 08:07:28.073567 | orchestrator | registry.osism.tech/osism/osism latest f46016a852e6 8 hours ago 325MB
2025-09-23 08:07:28.073578 | orchestrator | registry.osism.tech/osism/osism-frontend latest 4d4f18daeeee 8 hours ago 236MB
2025-09-23 08:07:28.073589 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 766813fecfdb 8 hours ago 315MB
2025-09-23 08:07:28.073599 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 weeks ago 275MB
2025-09-23 08:07:28.073610 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 48f7ae354376 6 weeks ago 329MB
2025-09-23 08:07:28.073621 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 2 months ago 226MB
2025-09-23 08:07:28.073631 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB
2025-09-23 08:07:28.073642 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 8 months ago 571MB
2025-09-23 08:07:28.073653 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB
2025-09-23 08:07:28.273814 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-23 08:07:28.274814 | orchestrator | ++ semver latest 5.0.0
2025-09-23 08:07:28.321250 | orchestrator |
2025-09-23 08:07:28.321329 | orchestrator | ## Containers @ testbed-node-0
2025-09-23 08:07:28.321343 | orchestrator |
2025-09-23 08:07:28.321354 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-23 08:07:28.321365 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-23 08:07:28.321376 | orchestrator | + echo
2025-09-23 08:07:28.321388 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-09-23 08:07:28.321400 | orchestrator | + echo
2025-09-23 08:07:28.321410 | orchestrator | + osism container testbed-node-0 ps
2025-09-23 08:07:30.560333 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-23 08:07:30.560445 | orchestrator | 340a1a929557 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-09-23 08:07:30.560463 | orchestrator | 2c37fe40f26b registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-23 08:07:30.560476 | orchestrator | ab726f7630b2 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-23 08:07:30.560487 | orchestrator | 02953d2d59b7 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-23 08:07:30.560498 | orchestrator | 939e1084a4b2 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-09-23 08:07:30.560510 | orchestrator | 10a887abeaef registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2025-09-23 08:07:30.560521 | orchestrator | 7482608aef4c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler
2025-09-23 08:07:30.560532 | orchestrator | 13fde4bcac11 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-23 08:07:30.560559 | orchestrator | d6108610365f registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-09-23 08:07:30.560591 | orchestrator | 97c547b4b800 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-09-23 08:07:30.560603 | orchestrator | 689c8c9af41b registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) placement_api
2025-09-23 08:07:30.560615 | orchestrator | 6180ba8105f3 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-09-23 08:07:30.560626 | orchestrator | c2bfed81b280 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter
2025-09-23 08:07:30.560637 | orchestrator | d0180dba3fd7 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-09-23 08:07:30.560649 | orchestrator | e78ef394137b registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-09-23 08:07:30.560660 | orchestrator | 6a6ddfc13c58 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-09-23 08:07:30.560671 | orchestrator | 3c66ad574a32 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-09-23 08:07:30.560682 | orchestrator | bbb88a1bd3f3 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-09-23 08:07:30.560706 | orchestrator | 07238af5c331 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-09-23 08:07:30.560717 | orchestrator | 0d6a37e0f510 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-09-23 08:07:30.561030 | orchestrator | 355ed0ea8430 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-09-23 08:07:30.561048 | orchestrator | 17cbd509e745 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-09-23 08:07:30.561059 | orchestrator | 44260411e71b registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-09-23 08:07:30.561070 | orchestrator | 6438274c6a27 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-09-23 08:07:30.561082 | orchestrator | f8a8f3b2ebb4 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-09-23 08:07:30.561093 | orchestrator | 7489047ff2d1 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-09-23 08:07:30.561108 | orchestrator | 63880f2a78dc registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0
2025-09-23 08:07:30.561119 | orchestrator | 89aa93bbd49f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-09-23 08:07:30.561130 | orchestrator | 29fd15622338 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-09-23 08:07:30.561157 | orchestrator | 88f51af44c2e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-09-23 08:07:30.561169 | orchestrator | 5dbde2c328f9 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-09-23 08:07:30.561180 | orchestrator | 20fb0ebf4196 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-09-23 08:07:30.561191 | orchestrator | 37c88df6fb5a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-09-23 08:07:30.561202 | orchestrator | 5747843d5476 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-09-23 08:07:30.561213 | orchestrator | eded3cecfe21 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0
2025-09-23 08:07:30.561224 | orchestrator | bae823c9df10 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-09-23 08:07:30.561235 | orchestrator | 038097862f97 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-09-23 08:07:30.561247 | orchestrator | 355b09ad7352 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-09-23 08:07:30.561258 | orchestrator | 331ee228d6ad registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-09-23 08:07:30.561269 | orchestrator | 02c3b00fa792 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-09-23 08:07:30.561280 | orchestrator | 8fe8be78bf16 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-09-23 08:07:30.561291 | orchestrator | 1faf172a6303 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-09-23 08:07:30.561307 | orchestrator | 7858301a367a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0
2025-09-23 08:07:30.561335 | orchestrator | e99f6be67672 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-09-23 08:07:30.561347 | orchestrator | 9d47e543def0 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-09-23 08:07:30.561358 | orchestrator | 40cf29bcb201 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db
2025-09-23 08:07:30.561369 | orchestrator | 534aea134c82 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-09-23 08:07:30.561380 | orchestrator | e475120b93b7 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2025-09-23 08:07:30.561391 | orchestrator | 653d2a8e38f2 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-09-23 08:07:30.561409 | orchestrator | c951ff061c9d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-09-23 08:07:30.561420 | orchestrator | 6850569161a0 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-09-23 08:07:30.561431 | orchestrator | 8f98a9a71072 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-23 08:07:30.828573 | orchestrator |
2025-09-23 08:07:30.828648 | orchestrator | ## Images @ testbed-node-0
2025-09-23 08:07:30.828661 | orchestrator |
2025-09-23 08:07:30.828670 | orchestrator | + echo
2025-09-23 08:07:30.828679 | orchestrator | + echo '## Images @ testbed-node-0'
2025-09-23 08:07:30.828689 | orchestrator | + echo
2025-09-23 08:07:30.828698 | orchestrator | + osism container testbed-node-0 images
2025-09-23 08:07:32.961535 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-23 08:07:32.961659 | orchestrator | registry.osism.tech/osism/ceph-daemon reef e9f5591a97f5 5 hours ago 1.27GB
2025-09-23 08:07:32.961676 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 3000ad24ff14 6 hours ago 631MB
2025-09-23 08:07:32.961688 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 1a661fc5b156 6 hours ago 328MB
2025-09-23 08:07:32.961700 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 3bc0ef19ac3e 6 hours ago 321MB
2025-09-23 08:07:32.961711 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 72911f8746e1 6 hours ago 748MB
2025-09-23 08:07:32.961725 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5a589725d8fe 6 hours ago 320MB
2025-09-23 08:07:32.961737 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 6eaddc458de0 6 hours ago 331MB
2025-09-23 08:07:32.961795 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 851486d46497 6 hours ago 1.56GB
2025-09-23 08:07:32.961809 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 aab249247722 6 hours ago 1.59GB
2025-09-23 08:07:32.961820 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 33dbbd6bf609 6 hours ago 420MB
2025-09-23 08:07:32.961831 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 a20ee62d6667 6 hours ago 377MB
2025-09-23 08:07:32.961843 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 976addeedc66 6 hours ago 1.05GB
2025-09-23 08:07:32.961854 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 b0b706b00b76 6 hours ago 593MB
2025-09-23 08:07:32.961865 | orchestrator | registry.osism.tech/kolla/redis 2024.2 312ed77c00f3 6 hours ago 327MB
2025-09-23 08:07:32.961901 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 7827f537ed7d 6 hours ago 327MB
2025-09-23 08:07:32.961913 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 359830abf907 6 hours ago 347MB
2025-09-23 08:07:32.961924 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 098de4c9f434 6 hours ago 356MB
2025-09-23 08:07:32.961936 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 0ee914a8f229 6 hours ago 360MB
2025-09-23 08:07:32.961947 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 44a084c0fe4c 6 hours ago 412MB
2025-09-23 08:07:32.961958 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 d5f1f3b81c72 6 hours ago 353MB
2025-09-23 08:07:32.961969 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 53ccf00c829a 6 hours ago 364MB
2025-09-23 08:07:32.961999 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 77c772c3ac2a 6 hours ago 364MB
2025-09-23 08:07:32.962010 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ab8c9ccc68cd 6 hours ago 1.21GB
2025-09-23 08:07:32.962096 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9fe8b2ea3aeb 6 hours ago 949MB
2025-09-23 08:07:32.962109 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7708bcea1962 6 hours ago 949MB
2025-09-23 08:07:32.962122 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 372f5a1d71bc 6 hours ago 949MB
2025-09-23 08:07:32.962136 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 ea18dcfcc0c2 6 hours ago 949MB
2025-09-23 08:07:32.962150 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 cf3c5c1a3449 6 hours ago 1.25GB
2025-09-23 08:07:32.962163 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 345152f7f511 6 hours ago 1.15GB
2025-09-23 08:07:32.962175 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 f05ceb67263a 6 hours ago 1.04GB
2025-09-23 08:07:32.962189 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 d646cf28629a 6 hours ago 1.04GB
2025-09-23 08:07:32.962211 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0c603c269aa6 6 hours ago 1.16GB
2025-09-23 08:07:32.962224 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 35af7dc2a171 6 hours ago 1.11GB
2025-09-23 08:07:32.962237 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 534fc5014e30 6 hours ago 1.11GB
2025-09-23 08:07:32.962249 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 d5ba5d613d37 6 hours ago 1.12GB
2025-09-23 08:07:32.962262 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 d5c235596cda 6 hours ago 1.11GB
2025-09-23 08:07:32.962275 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 41df6920972f 6 hours ago 1.1GB
2025-09-23 08:07:32.962306 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 837165aa3c36 6 hours ago 1.12GB
2025-09-23 08:07:32.962320 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 a83133128003 6 hours ago 1.12GB
2025-09-23 08:07:32.962333 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 41fd463dd555 6 hours ago 1.1GB
2025-09-23 08:07:32.962346 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 08ebaf7372f5 6 hours ago 1.1GB
2025-09-23 08:07:32.962359 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4fdee6a1d73a 6 hours ago 1.41GB
2025-09-23 08:07:32.962372 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0022aa6919b2 6 hours ago 1.41GB
2025-09-23 08:07:32.962385 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 d7a9a5cc5987 6 hours ago 1.04GB
2025-09-23 08:07:32.962398 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 580332465c74 6 hours ago 1.04GB
2025-09-23 08:07:32.962410 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 13ce3c63e088 6 hours ago 1.04GB
2025-09-23 08:07:32.962423 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 c0df3ab7526b 6 hours ago 1.04GB
2025-09-23 08:07:32.962436 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 3908a81eda36 6 hours ago 1.31GB
2025-09-23 08:07:32.962449 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 9277e4aabe2a 6 hours ago 1.2GB
2025-09-23 08:07:32.962461 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 4f6b401ef119 6 hours ago 1.3GB
2025-09-23 08:07:32.962472 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 94ac82dcca17 6 hours ago 1.42GB
2025-09-23 08:07:32.962492 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 ec611874aa57 6 hours ago 1.3GB
2025-09-23 08:07:32.962503 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 236334a86b49 6 hours ago 1.3GB
2025-09-23 08:07:32.962514 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 300e63005cd6 6 hours ago 1.05GB
2025-09-23 08:07:32.962525 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1c4a76e66928 6 hours ago 1.05GB
2025-09-23 08:07:32.962536 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7ac86918728b 6 hours ago 1.06GB
2025-09-23 08:07:32.962547 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 122ea72670a1 6 hours ago 1.05GB
2025-09-23 08:07:32.962558 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 92fa72021016 6 hours ago 1.05GB
2025-09-23 08:07:32.962569 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ce8035a0bea3 6 hours ago 1.06GB
2025-09-23 08:07:32.962580 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 edd0f2f4839d 6 hours ago 1.06GB
2025-09-23 08:07:32.962595 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 037074b5959f 6 hours ago 1.06GB
2025-09-23 08:07:32.962607 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 cb85568c5d26 6 hours ago 1.06GB
2025-09-23 08:07:32.962618 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b99e48c2bc9a 6 hours ago 1.04GB
2025-09-23 08:07:33.314328 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-23 08:07:33.314670 | orchestrator | ++ semver latest 5.0.0
2025-09-23 08:07:33.378262 | orchestrator |
2025-09-23 08:07:33.378335 | orchestrator | ## Containers @ testbed-node-1
2025-09-23 08:07:33.378344 | orchestrator |
2025-09-23 08:07:33.378351 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-23 08:07:33.378357 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-23 08:07:33.378367 | orchestrator | + echo
2025-09-23 08:07:33.378378 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-09-23 08:07:33.378389 | orchestrator | + echo
2025-09-23 08:07:33.378399 | orchestrator | + osism container testbed-node-1 ps
2025-09-23 08:07:35.807793 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-23 08:07:35.807864 | orchestrator | 7f5219587e17 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-09-23 08:07:35.807873 | orchestrator | d493464831d0 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-23 08:07:35.807879 | orchestrator | 6ae4a5c5a931 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-23 08:07:35.807913 | orchestrator | d453736ee9cc registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-23 08:07:35.807920 | orchestrator | fb432a53069d registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-23 08:07:35.807933 | orchestrator | d4eb93385844 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2025-09-23 08:07:35.807938 | orchestrator | c36e24f9e37d registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-23 08:07:35.807950 | orchestrator | 6c19279801fe registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy)
cinder_api 2025-09-23 08:07:35.807955 | orchestrator | f83107098f5e registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-09-23 08:07:35.807975 | orchestrator | e35e7a71042c registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-09-23 08:07:35.807981 | orchestrator | 432ef17baa70 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes (healthy) placement_api 2025-09-23 08:07:35.807986 | orchestrator | 3a5d7c297eb5 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-09-23 08:07:35.807991 | orchestrator | 1a8a5d457376 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-09-23 08:07:35.807997 | orchestrator | 23fb567e0cb1 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-09-23 08:07:35.808002 | orchestrator | 86f56ceedc84 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-09-23 08:07:35.808013 | orchestrator | 1d6d7f1fd3be registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-09-23 08:07:35.808018 | orchestrator | ad6ad5aea9fb registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-09-23 08:07:35.808023 | orchestrator | 54b319f12361 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-09-23 08:07:35.808030 | orchestrator | 737f8a6e3452 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 
"dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-09-23 08:07:35.808035 | orchestrator | 6306b06a4454 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-09-23 08:07:35.808040 | orchestrator | 32e01b8638e6 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-09-23 08:07:35.808057 | orchestrator | 248a49407d61 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-09-23 08:07:35.808063 | orchestrator | 06cb64d9f629 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-09-23 08:07:35.808068 | orchestrator | 4d72ad3dd1ee registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-09-23 08:07:35.808073 | orchestrator | f4106ce96a23 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-09-23 08:07:35.808078 | orchestrator | d204c687e52e registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-09-23 08:07:35.808083 | orchestrator | 2ba18691ab19 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-09-23 08:07:35.808088 | orchestrator | 1fd9d5e00489 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-09-23 08:07:35.808098 | orchestrator | 2b35ea3478f4 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-09-23 08:07:35.808103 | orchestrator | 62faee9ea808 
registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-09-23 08:07:35.808108 | orchestrator | 0adb59f0d49e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-09-23 08:07:35.808113 | orchestrator | dcf0267e6bc4 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-09-23 08:07:35.808117 | orchestrator | 2af15d94d91d registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-09-23 08:07:35.808122 | orchestrator | 8446e6e8b5a8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-09-23 08:07:35.808127 | orchestrator | 66289d2a1413 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-09-23 08:07:35.808132 | orchestrator | 120c58357433 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-09-23 08:07:35.808137 | orchestrator | 93fbe8a9dca4 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-09-23 08:07:35.808145 | orchestrator | 986c99ad8d51 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-09-23 08:07:35.808150 | orchestrator | 45f06b594d5b registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-09-23 08:07:35.808155 | orchestrator | ab3a36d8ca61 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-09-23 08:07:35.808160 | orchestrator | c620287fe5de registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 
minutes ago Up 27 minutes ovn_nb_db 2025-09-23 08:07:35.808165 | orchestrator | 22d8948630f0 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-09-23 08:07:35.808170 | orchestrator | b9ad36b1f963 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-09-23 08:07:35.808175 | orchestrator | 0ec072e73d9c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1 2025-09-23 08:07:35.808185 | orchestrator | ef6f8f4c3185 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-09-23 08:07:35.808190 | orchestrator | 7fa108e04977 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-09-23 08:07:35.808195 | orchestrator | 50aa10226a2a registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-09-23 08:07:35.808200 | orchestrator | a01863c03cc0 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-09-23 08:07:35.808209 | orchestrator | 0a585513a09e registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-09-23 08:07:35.808214 | orchestrator | 20f06be8e9cb registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-09-23 08:07:35.808219 | orchestrator | 10c06e5c11d1 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-09-23 08:07:35.808224 | orchestrator | d13bb30e6801 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-09-23 08:07:36.119005 | orchestrator | 2025-09-23 
08:07:36.119100 | orchestrator | ## Images @ testbed-node-1 2025-09-23 08:07:36.119116 | orchestrator | 2025-09-23 08:07:36.119128 | orchestrator | + echo 2025-09-23 08:07:36.119140 | orchestrator | + echo '## Images @ testbed-node-1' 2025-09-23 08:07:36.119153 | orchestrator | + echo 2025-09-23 08:07:36.119164 | orchestrator | + osism container testbed-node-1 images 2025-09-23 08:07:38.575318 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-23 08:07:38.576166 | orchestrator | registry.osism.tech/osism/ceph-daemon reef e9f5591a97f5 5 hours ago 1.27GB 2025-09-23 08:07:38.576200 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 3000ad24ff14 6 hours ago 631MB 2025-09-23 08:07:38.576212 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 1a661fc5b156 6 hours ago 328MB 2025-09-23 08:07:38.576223 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 3bc0ef19ac3e 6 hours ago 321MB 2025-09-23 08:07:38.576233 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 72911f8746e1 6 hours ago 748MB 2025-09-23 08:07:38.576245 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5a589725d8fe 6 hours ago 320MB 2025-09-23 08:07:38.576254 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 6eaddc458de0 6 hours ago 331MB 2025-09-23 08:07:38.576262 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 851486d46497 6 hours ago 1.56GB 2025-09-23 08:07:38.576271 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 aab249247722 6 hours ago 1.59GB 2025-09-23 08:07:38.576280 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 33dbbd6bf609 6 hours ago 420MB 2025-09-23 08:07:38.576289 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 976addeedc66 6 hours ago 1.05GB 2025-09-23 08:07:38.576297 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 a20ee62d6667 6 hours ago 377MB 2025-09-23 08:07:38.576306 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 b0b706b00b76 6 hours ago 593MB 
2025-09-23 08:07:38.576315 | orchestrator | registry.osism.tech/kolla/redis 2024.2 312ed77c00f3 6 hours ago 327MB 2025-09-23 08:07:38.576324 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 7827f537ed7d 6 hours ago 327MB 2025-09-23 08:07:38.576349 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 359830abf907 6 hours ago 347MB 2025-09-23 08:07:38.576359 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 098de4c9f434 6 hours ago 356MB 2025-09-23 08:07:38.576368 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 0ee914a8f229 6 hours ago 360MB 2025-09-23 08:07:38.576377 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 44a084c0fe4c 6 hours ago 412MB 2025-09-23 08:07:38.576385 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 d5f1f3b81c72 6 hours ago 353MB 2025-09-23 08:07:38.576412 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 53ccf00c829a 6 hours ago 364MB 2025-09-23 08:07:38.576421 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 77c772c3ac2a 6 hours ago 364MB 2025-09-23 08:07:38.576430 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ab8c9ccc68cd 6 hours ago 1.21GB 2025-09-23 08:07:38.576439 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9fe8b2ea3aeb 6 hours ago 949MB 2025-09-23 08:07:38.576448 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7708bcea1962 6 hours ago 949MB 2025-09-23 08:07:38.576457 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 372f5a1d71bc 6 hours ago 949MB 2025-09-23 08:07:38.576465 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 ea18dcfcc0c2 6 hours ago 949MB 2025-09-23 08:07:38.576474 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 cf3c5c1a3449 6 hours ago 1.25GB 2025-09-23 08:07:38.576483 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 
345152f7f511 6 hours ago 1.15GB 2025-09-23 08:07:38.576491 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0c603c269aa6 6 hours ago 1.16GB 2025-09-23 08:07:38.576500 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 35af7dc2a171 6 hours ago 1.11GB 2025-09-23 08:07:38.576509 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 534fc5014e30 6 hours ago 1.11GB 2025-09-23 08:07:38.576517 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4fdee6a1d73a 6 hours ago 1.41GB 2025-09-23 08:07:38.576526 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0022aa6919b2 6 hours ago 1.41GB 2025-09-23 08:07:38.576535 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 3908a81eda36 6 hours ago 1.31GB 2025-09-23 08:07:38.576544 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 9277e4aabe2a 6 hours ago 1.2GB 2025-09-23 08:07:38.576553 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 4f6b401ef119 6 hours ago 1.3GB 2025-09-23 08:07:38.576579 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 94ac82dcca17 6 hours ago 1.42GB 2025-09-23 08:07:38.576588 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 ec611874aa57 6 hours ago 1.3GB 2025-09-23 08:07:38.576597 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 236334a86b49 6 hours ago 1.3GB 2025-09-23 08:07:38.576606 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 300e63005cd6 6 hours ago 1.05GB 2025-09-23 08:07:38.576614 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1c4a76e66928 6 hours ago 1.05GB 2025-09-23 08:07:38.576623 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7ac86918728b 6 hours ago 1.06GB 2025-09-23 08:07:38.576631 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 122ea72670a1 6 hours ago 1.05GB 2025-09-23 08:07:38.576640 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 92fa72021016 6 hours ago 
1.05GB 2025-09-23 08:07:38.576649 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ce8035a0bea3 6 hours ago 1.06GB 2025-09-23 08:07:38.576657 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 edd0f2f4839d 6 hours ago 1.06GB 2025-09-23 08:07:38.576666 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 037074b5959f 6 hours ago 1.06GB 2025-09-23 08:07:38.576675 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 cb85568c5d26 6 hours ago 1.06GB 2025-09-23 08:07:38.576684 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b99e48c2bc9a 6 hours ago 1.04GB 2025-09-23 08:07:38.980324 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-23 08:07:38.980881 | orchestrator | ++ semver latest 5.0.0 2025-09-23 08:07:39.045185 | orchestrator | 2025-09-23 08:07:39.045245 | orchestrator | ## Containers @ testbed-node-2 2025-09-23 08:07:39.045252 | orchestrator | 2025-09-23 08:07:39.045256 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-23 08:07:39.045260 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-23 08:07:39.045264 | orchestrator | + echo 2025-09-23 08:07:39.045269 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-09-23 08:07:39.045273 | orchestrator | + echo 2025-09-23 08:07:39.045278 | orchestrator | + osism container testbed-node-2 ps 2025-09-23 08:07:41.573079 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-23 08:07:41.573175 | orchestrator | aa64fe1efe87 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-09-23 08:07:41.573192 | orchestrator | 709d0226e8ff registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-09-23 08:07:41.573204 | orchestrator | a5bc6c06718e registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 
minutes grafana 2025-09-23 08:07:41.573216 | orchestrator | d1ed994763db registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-09-23 08:07:41.573246 | orchestrator | 5f64c81a5aa1 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-09-23 08:07:41.573259 | orchestrator | 4c42ee7d36e0 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2025-09-23 08:07:41.573270 | orchestrator | 571200ee8bd5 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-09-23 08:07:41.573281 | orchestrator | 2aa77da5b3ad registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-09-23 08:07:41.573292 | orchestrator | 0cc7e92876dc registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-09-23 08:07:41.573304 | orchestrator | 2d258be45891 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-09-23 08:07:41.573315 | orchestrator | 9d038f4150d8 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-09-23 08:07:41.573326 | orchestrator | 4f0683ce03ff registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-09-23 08:07:41.573338 | orchestrator | 8872e070f03b registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-09-23 08:07:41.573350 | orchestrator | ddf0a0345d01 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) 
designate_worker 2025-09-23 08:07:41.573361 | orchestrator | 2d01e3a4680e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-09-23 08:07:41.573372 | orchestrator | 35538c9aca61 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-09-23 08:07:41.573404 | orchestrator | cc8326ee5772 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-09-23 08:07:41.573416 | orchestrator | 9f501fb9cf04 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-09-23 08:07:41.573427 | orchestrator | e1e144fe99f5 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-09-23 08:07:41.573438 | orchestrator | 58c195778089 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-09-23 08:07:41.573450 | orchestrator | 89718570558f registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-09-23 08:07:41.573478 | orchestrator | 15639a08ad4a registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-09-23 08:07:41.573490 | orchestrator | c518212619b3 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-09-23 08:07:41.573501 | orchestrator | 81843b600092 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-09-23 08:07:41.573512 | orchestrator | 42ab14cd53a7 
registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-09-23 08:07:41.573523 | orchestrator | 64c28562cf59 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-09-23 08:07:41.573534 | orchestrator | 929a9cfe1efa registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-09-23 08:07:41.573546 | orchestrator | 2c037f685a6a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-09-23 08:07:41.573557 | orchestrator | fa35c591ad19 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-09-23 08:07:41.573568 | orchestrator | a48b8d81b425 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-09-23 08:07:41.573579 | orchestrator | 709dd8610016 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-09-23 08:07:41.573590 | orchestrator | 80c03346f589 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-09-23 08:07:41.573601 | orchestrator | 42e13c405194 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-09-23 08:07:41.573612 | orchestrator | 148236fb11e7 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-09-23 08:07:41.573626 | orchestrator | 1a346c37e76b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-09-23 08:07:41.573652 | orchestrator | f0fb32e33dc4 
registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-09-23 08:07:41.573672 | orchestrator | 1cb2ffca11db registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-09-23 08:07:41.573687 | orchestrator | 18278ed99cd7 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-09-23 08:07:41.573700 | orchestrator | b3f9207926c9 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-09-23 08:07:41.573713 | orchestrator | f9e33b683269 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-09-23 08:07:41.573727 | orchestrator | cf883f583dfc registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-09-23 08:07:41.573740 | orchestrator | 36d3aa908e59 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-09-23 08:07:41.573753 | orchestrator | 4a21a48215dc registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-09-23 08:07:41.573767 | orchestrator | 829a4f2a9ec6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-09-23 08:07:41.573788 | orchestrator | 603e93ce9ece registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-09-23 08:07:41.573803 | orchestrator | 3aab9da9e04f registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-09-23 08:07:41.573813 | orchestrator | 016610af3099 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 
minutes (healthy) redis_sentinel 2025-09-23 08:07:41.573825 | orchestrator | 5cbbae1f71f5 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-09-23 08:07:41.573835 | orchestrator | d302bc4335e7 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-09-23 08:07:41.573846 | orchestrator | 710a1f72608b registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 30 minutes cron 2025-09-23 08:07:41.573857 | orchestrator | 071df028ccaf registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-09-23 08:07:41.573868 | orchestrator | 1ab52909cd94 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-09-23 08:07:41.998881 | orchestrator | 2025-09-23 08:07:41.999055 | orchestrator | ## Images @ testbed-node-2 2025-09-23 08:07:41.999072 | orchestrator | 2025-09-23 08:07:41.999085 | orchestrator | + echo 2025-09-23 08:07:41.999097 | orchestrator | + echo '## Images @ testbed-node-2' 2025-09-23 08:07:41.999109 | orchestrator | + echo 2025-09-23 08:07:41.999121 | orchestrator | + osism container testbed-node-2 images 2025-09-23 08:07:44.380803 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-23 08:07:44.380872 | orchestrator | registry.osism.tech/osism/ceph-daemon reef e9f5591a97f5 5 hours ago 1.27GB 2025-09-23 08:07:44.380878 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 3000ad24ff14 6 hours ago 631MB 2025-09-23 08:07:44.380897 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 1a661fc5b156 6 hours ago 328MB 2025-09-23 08:07:44.380902 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 3bc0ef19ac3e 6 hours ago 321MB 2025-09-23 08:07:44.380906 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 72911f8746e1 6 hours ago 748MB 2025-09-23 08:07:44.380910 | orchestrator | 
registry.osism.tech/kolla/cron 2024.2 5a589725d8fe 6 hours ago 320MB
2025-09-23 08:07:44.380914 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 6eaddc458de0 6 hours ago 331MB
2025-09-23 08:07:44.380934 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 851486d46497 6 hours ago 1.56GB
2025-09-23 08:07:44.380939 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 aab249247722 6 hours ago 1.59GB
2025-09-23 08:07:44.380943 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 33dbbd6bf609 6 hours ago 420MB
2025-09-23 08:07:44.380947 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 976addeedc66 6 hours ago 1.05GB
2025-09-23 08:07:44.380951 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 a20ee62d6667 6 hours ago 377MB
2025-09-23 08:07:44.380955 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 b0b706b00b76 6 hours ago 593MB
2025-09-23 08:07:44.380959 | orchestrator | registry.osism.tech/kolla/redis 2024.2 312ed77c00f3 6 hours ago 327MB
2025-09-23 08:07:44.380964 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 7827f537ed7d 6 hours ago 327MB
2025-09-23 08:07:44.380967 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 359830abf907 6 hours ago 347MB
2025-09-23 08:07:44.380971 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 098de4c9f434 6 hours ago 356MB
2025-09-23 08:07:44.380975 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 0ee914a8f229 6 hours ago 360MB
2025-09-23 08:07:44.380979 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 44a084c0fe4c 6 hours ago 412MB
2025-09-23 08:07:44.380983 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 d5f1f3b81c72 6 hours ago 353MB
2025-09-23 08:07:44.380987 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 77c772c3ac2a 6 hours ago 364MB
2025-09-23 08:07:44.380990 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 53ccf00c829a 6 hours ago 364MB
2025-09-23 08:07:44.380994 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ab8c9ccc68cd 6 hours ago 1.21GB
2025-09-23 08:07:44.380998 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9fe8b2ea3aeb 6 hours ago 949MB
2025-09-23 08:07:44.381002 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7708bcea1962 6 hours ago 949MB
2025-09-23 08:07:44.381006 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 372f5a1d71bc 6 hours ago 949MB
2025-09-23 08:07:44.381009 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 ea18dcfcc0c2 6 hours ago 949MB
2025-09-23 08:07:44.381013 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 cf3c5c1a3449 6 hours ago 1.25GB
2025-09-23 08:07:44.381017 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 345152f7f511 6 hours ago 1.15GB
2025-09-23 08:07:44.381021 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0c603c269aa6 6 hours ago 1.16GB
2025-09-23 08:07:44.381037 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 35af7dc2a171 6 hours ago 1.11GB
2025-09-23 08:07:44.381041 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 534fc5014e30 6 hours ago 1.11GB
2025-09-23 08:07:44.381048 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4fdee6a1d73a 6 hours ago 1.41GB
2025-09-23 08:07:44.381052 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0022aa6919b2 6 hours ago 1.41GB
2025-09-23 08:07:44.381056 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 3908a81eda36 6 hours ago 1.31GB
2025-09-23 08:07:44.381060 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 9277e4aabe2a 6 hours ago 1.2GB
2025-09-23 08:07:44.381064 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 4f6b401ef119 6 hours ago 1.3GB
2025-09-23 08:07:44.381077 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 94ac82dcca17 6 hours ago 1.42GB
2025-09-23 08:07:44.381081 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 ec611874aa57 6 hours ago 1.3GB
2025-09-23 08:07:44.381085 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 236334a86b49 6 hours ago 1.3GB
2025-09-23 08:07:44.381089 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 300e63005cd6 6 hours ago 1.05GB
2025-09-23 08:07:44.381093 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1c4a76e66928 6 hours ago 1.05GB
2025-09-23 08:07:44.381097 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7ac86918728b 6 hours ago 1.06GB
2025-09-23 08:07:44.381100 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 122ea72670a1 6 hours ago 1.05GB
2025-09-23 08:07:44.381104 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 92fa72021016 6 hours ago 1.05GB
2025-09-23 08:07:44.381108 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ce8035a0bea3 6 hours ago 1.06GB
2025-09-23 08:07:44.381112 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 edd0f2f4839d 6 hours ago 1.06GB
2025-09-23 08:07:44.381116 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 037074b5959f 6 hours ago 1.06GB
2025-09-23 08:07:44.381122 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 cb85568c5d26 6 hours ago 1.06GB
2025-09-23 08:07:44.381126 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b99e48c2bc9a 6 hours ago 1.04GB
2025-09-23 08:07:44.668192 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-09-23 08:07:44.676743 | orchestrator | + set -e
2025-09-23 08:07:44.676846 | orchestrator | + source /opt/manager-vars.sh
2025-09-23 08:07:44.677804 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-23 08:07:44.677868 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-23 08:07:44.677880 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-23 08:07:44.677891 | orchestrator | ++ CEPH_VERSION=reef
2025-09-23 08:07:44.677903 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-23 08:07:44.677915 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-23 08:07:44.677950 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-23 08:07:44.677962 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-23 08:07:44.677974 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-23 08:07:44.677985 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-23 08:07:44.677996 | orchestrator | ++ export ARA=false
2025-09-23 08:07:44.678007 | orchestrator | ++ ARA=false
2025-09-23 08:07:44.678064 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-23 08:07:44.678077 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-23 08:07:44.678088 | orchestrator | ++ export TEMPEST=false
2025-09-23 08:07:44.678098 | orchestrator | ++ TEMPEST=false
2025-09-23 08:07:44.678109 | orchestrator | ++ export IS_ZUUL=true
2025-09-23 08:07:44.678120 | orchestrator | ++ IS_ZUUL=true
2025-09-23 08:07:44.678131 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228
2025-09-23 08:07:44.678142 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228
2025-09-23 08:07:44.678153 | orchestrator | ++ export EXTERNAL_API=false
2025-09-23 08:07:44.678164 | orchestrator | ++ EXTERNAL_API=false
2025-09-23 08:07:44.678175 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-23 08:07:44.678185 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-23 08:07:44.678222 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-23 08:07:44.678233 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-23 08:07:44.678244 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-23 08:07:44.678255 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-23 08:07:44.678328 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-23 08:07:44.678343 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-09-23 08:07:44.689580 | orchestrator | + set -e
2025-09-23 08:07:44.689652 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-23 08:07:44.689665 | orchestrator | ++ export INTERACTIVE=false
2025-09-23 08:07:44.689677 | orchestrator | ++ INTERACTIVE=false
2025-09-23 08:07:44.689688 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-23 08:07:44.689699 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-23 08:07:44.689710 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-23 08:07:44.690394 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-23 08:07:44.693584 | orchestrator |
2025-09-23 08:07:44.693610 | orchestrator | # Ceph status
2025-09-23 08:07:44.693623 | orchestrator |
2025-09-23 08:07:44.693635 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-23 08:07:44.693646 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-23 08:07:44.693657 | orchestrator | + echo
2025-09-23 08:07:44.693668 | orchestrator | + echo '# Ceph status'
2025-09-23 08:07:44.693680 | orchestrator | + echo
2025-09-23 08:07:44.693691 | orchestrator | + ceph -s
2025-09-23 08:07:45.274216 | orchestrator | cluster:
2025-09-23 08:07:45.274336 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-09-23 08:07:45.274354 | orchestrator | health: HEALTH_OK
2025-09-23 08:07:45.274367 | orchestrator |
2025-09-23 08:07:45.274378 | orchestrator | services:
2025-09-23 08:07:45.274390 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m)
2025-09-23 08:07:45.274403 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0
2025-09-23 08:07:45.274415 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-09-23 08:07:45.274427 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 25m)
2025-09-23 08:07:45.274438 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-09-23 08:07:45.274450 | orchestrator |
2025-09-23 08:07:45.274461 | orchestrator | data:
2025-09-23 08:07:45.274472 | orchestrator | volumes: 1/1 healthy
2025-09-23 08:07:45.274483 | orchestrator | pools: 14 pools, 401 pgs
2025-09-23 08:07:45.274494 | orchestrator | objects: 522 objects, 2.2 GiB
2025-09-23 08:07:45.274505 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-09-23 08:07:45.274516 | orchestrator | pgs: 401 active+clean
2025-09-23 08:07:45.274527 | orchestrator |
2025-09-23 08:07:45.319965 | orchestrator |
2025-09-23 08:07:45.320055 | orchestrator | # Ceph versions
2025-09-23 08:07:45.320069 | orchestrator |
2025-09-23 08:07:45.320082 | orchestrator | + echo
2025-09-23 08:07:45.320093 | orchestrator | + echo '# Ceph versions'
2025-09-23 08:07:45.320105 | orchestrator | + echo
2025-09-23 08:07:45.320117 | orchestrator | + ceph versions
2025-09-23 08:07:45.922277 | orchestrator | {
2025-09-23 08:07:45.922377 | orchestrator | "mon": {
2025-09-23 08:07:45.922393 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-23 08:07:45.922406 | orchestrator | },
2025-09-23 08:07:45.922417 | orchestrator | "mgr": {
2025-09-23 08:07:45.922429 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-23 08:07:45.922440 | orchestrator | },
2025-09-23 08:07:45.922451 | orchestrator | "osd": {
2025-09-23 08:07:45.922462 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-09-23 08:07:45.922473 | orchestrator | },
2025-09-23 08:07:45.922484 | orchestrator | "mds": {
2025-09-23 08:07:45.922495 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-23 08:07:45.922506 | orchestrator | },
2025-09-23 08:07:45.922517 | orchestrator | "rgw": {
2025-09-23 08:07:45.922529 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-23 08:07:45.922539 | orchestrator | },
2025-09-23 08:07:45.922550 | orchestrator | "overall": {
2025-09-23 08:07:45.922562 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-09-23 08:07:45.922573 | orchestrator | }
2025-09-23 08:07:45.922584 | orchestrator | }
2025-09-23 08:07:45.968857 | orchestrator |
2025-09-23 08:07:45.969011 | orchestrator | # Ceph OSD tree
2025-09-23 08:07:45.969031 | orchestrator |
2025-09-23 08:07:45.969044 | orchestrator | + echo
2025-09-23 08:07:45.969079 | orchestrator | + echo '# Ceph OSD tree'
2025-09-23 08:07:45.969092 | orchestrator | + echo
2025-09-23 08:07:45.969103 | orchestrator | + ceph osd df tree
2025-09-23 08:07:46.562682 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-09-23 08:07:46.562794 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-09-23 08:07:46.562810 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2025-09-23 08:07:46.562822 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.89 1.16 206 up osd.2
2025-09-23 08:07:46.562833 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1012 MiB 939 MiB 1 KiB 74 MiB 19 GiB 4.95 0.84 186 up osd.5
2025-09-23 08:07:46.562844 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-09-23 08:07:46.562855 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 70 MiB 18 GiB 8.10 1.37 200 up osd.0
2025-09-23 08:07:46.562866 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 764 MiB 691 MiB 1 KiB 74 MiB 19 GiB 3.74 0.63 190 up osd.4
2025-09-23 08:07:46.562878 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-09-23 08:07:46.562888 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.86 1.16 184 up osd.1
2025-09-23 08:07:46.562899 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1017 MiB 947 MiB 1 KiB 70 MiB 19 GiB 4.97 0.84 204 up osd.3
2025-09-23 08:07:46.562911 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-09-23 08:07:46.562923 | orchestrator | MIN/MAX VAR: 0.63/1.37 STDDEV: 1.48
2025-09-23 08:07:46.608053 | orchestrator |
2025-09-23 08:07:46.608115 | orchestrator | # Ceph monitor status
2025-09-23 08:07:46.608128 | orchestrator |
2025-09-23 08:07:46.608139 | orchestrator | + echo
2025-09-23 08:07:46.608151 | orchestrator | + echo '# Ceph monitor status'
2025-09-23 08:07:46.608162 | orchestrator | + echo
2025-09-23 08:07:46.608173 | orchestrator | + ceph mon stat
2025-09-23 08:07:47.302192 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-09-23 08:07:47.350733 | orchestrator |
2025-09-23 08:07:47.350807 | orchestrator | # Ceph quorum status
2025-09-23 08:07:47.350816 | orchestrator |
2025-09-23 08:07:47.350823 | orchestrator | + echo
2025-09-23 08:07:47.350831 | orchestrator | + echo '# Ceph quorum status'
2025-09-23 08:07:47.350837 | orchestrator | + echo
2025-09-23 08:07:47.351121 | orchestrator | + ceph quorum_status
2025-09-23 08:07:47.351652 | orchestrator | + jq
2025-09-23 08:07:47.966252 | orchestrator | {
2025-09-23 08:07:47.966365 | orchestrator | "election_epoch": 6,
2025-09-23 08:07:47.966382 | orchestrator | "quorum": [
2025-09-23 08:07:47.966421 | orchestrator | 0,
2025-09-23 08:07:47.966433 | orchestrator | 1,
2025-09-23 08:07:47.966444 | orchestrator | 2
2025-09-23 08:07:47.966455 | orchestrator | ],
2025-09-23 08:07:47.966469 | orchestrator | "quorum_names": [
2025-09-23 08:07:47.966489 | orchestrator | "testbed-node-0",
2025-09-23 08:07:47.966513 | orchestrator | "testbed-node-1",
2025-09-23 08:07:47.966540 | orchestrator | "testbed-node-2"
2025-09-23 08:07:47.966559 | orchestrator | ],
2025-09-23 08:07:47.966577 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-09-23 08:07:47.966596 | orchestrator | "quorum_age": 1743,
2025-09-23 08:07:47.966612 | orchestrator | "features": {
2025-09-23 08:07:47.966629 | orchestrator | "quorum_con": "4540138322906710015",
2025-09-23 08:07:47.966646 | orchestrator | "quorum_mon": [
2025-09-23 08:07:47.966664 | orchestrator | "kraken",
2025-09-23 08:07:47.966682 | orchestrator | "luminous",
2025-09-23 08:07:47.966702 | orchestrator | "mimic",
2025-09-23 08:07:47.966752 | orchestrator | "osdmap-prune",
2025-09-23 08:07:47.966772 | orchestrator | "nautilus",
2025-09-23 08:07:47.966791 | orchestrator | "octopus",
2025-09-23 08:07:47.966809 | orchestrator | "pacific",
2025-09-23 08:07:47.966827 | orchestrator | "elector-pinging",
2025-09-23 08:07:47.966843 | orchestrator | "quincy",
2025-09-23 08:07:47.966854 | orchestrator | "reef"
2025-09-23 08:07:47.966865 | orchestrator | ]
2025-09-23 08:07:47.966875 | orchestrator | },
2025-09-23 08:07:47.966886 | orchestrator | "monmap": {
2025-09-23 08:07:47.966897 | orchestrator | "epoch": 1,
2025-09-23 08:07:47.966908 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-09-23 08:07:47.966920 | orchestrator | "modified": "2025-09-23T07:38:23.051273Z",
2025-09-23 08:07:47.966956 | orchestrator | "created": "2025-09-23T07:38:23.051273Z",
2025-09-23 08:07:47.966968 | orchestrator | "min_mon_release": 18,
2025-09-23 08:07:47.966979 | orchestrator | "min_mon_release_name": "reef",
2025-09-23 08:07:47.966990 | orchestrator | "election_strategy": 1,
2025-09-23 08:07:47.967001 | orchestrator | "disallowed_leaders: ": "",
2025-09-23 08:07:47.967012 | orchestrator | "stretch_mode": false,
2025-09-23 08:07:47.967023 | orchestrator | "tiebreaker_mon": "",
2025-09-23 08:07:47.967034 | orchestrator | "removed_ranks: ": "",
2025-09-23 08:07:47.967044 | orchestrator | "features": {
2025-09-23 08:07:47.967055 | orchestrator | "persistent": [
2025-09-23 08:07:47.967066 | orchestrator | "kraken",
2025-09-23 08:07:47.967077 | orchestrator | "luminous",
2025-09-23 08:07:47.967087 | orchestrator | "mimic",
2025-09-23 08:07:47.967098 | orchestrator | "osdmap-prune",
2025-09-23 08:07:47.967109 | orchestrator | "nautilus",
2025-09-23 08:07:47.967120 | orchestrator | "octopus",
2025-09-23 08:07:47.967131 | orchestrator | "pacific",
2025-09-23 08:07:47.967142 | orchestrator | "elector-pinging",
2025-09-23 08:07:47.967153 | orchestrator | "quincy",
2025-09-23 08:07:47.967163 | orchestrator | "reef"
2025-09-23 08:07:47.967174 | orchestrator | ],
2025-09-23 08:07:47.967185 | orchestrator | "optional": []
2025-09-23 08:07:47.967196 | orchestrator | },
2025-09-23 08:07:47.967207 | orchestrator | "mons": [
2025-09-23 08:07:47.967217 | orchestrator | {
2025-09-23 08:07:47.967228 | orchestrator | "rank": 0,
2025-09-23 08:07:47.967239 | orchestrator | "name": "testbed-node-0",
2025-09-23 08:07:47.967250 | orchestrator | "public_addrs": {
2025-09-23 08:07:47.967261 | orchestrator | "addrvec": [
2025-09-23 08:07:47.967271 | orchestrator | {
2025-09-23 08:07:47.967282 | orchestrator | "type": "v2",
2025-09-23 08:07:47.967293 | orchestrator | "addr": "192.168.16.10:3300",
2025-09-23 08:07:47.967304 | orchestrator | "nonce": 0
2025-09-23 08:07:47.967315 | orchestrator | },
2025-09-23 08:07:47.967325 | orchestrator | {
2025-09-23 08:07:47.967336 | orchestrator | "type": "v1",
2025-09-23 08:07:47.967347 | orchestrator | "addr": "192.168.16.10:6789",
2025-09-23 08:07:47.967358 | orchestrator | "nonce": 0
2025-09-23 08:07:47.967369 | orchestrator | }
2025-09-23 08:07:47.967380 | orchestrator | ]
2025-09-23 08:07:47.967390 | orchestrator | },
2025-09-23 08:07:47.967401 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-09-23 08:07:47.967412 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-09-23 08:07:47.967423 | orchestrator | "priority": 0,
2025-09-23 08:07:47.967434 | orchestrator | "weight": 0,
2025-09-23 08:07:47.967444 | orchestrator | "crush_location": "{}"
2025-09-23 08:07:47.967455 | orchestrator | },
2025-09-23 08:07:47.967466 | orchestrator | {
2025-09-23 08:07:47.967477 | orchestrator | "rank": 1,
2025-09-23 08:07:47.967488 | orchestrator | "name": "testbed-node-1",
2025-09-23 08:07:47.967499 | orchestrator | "public_addrs": {
2025-09-23 08:07:47.967510 | orchestrator | "addrvec": [
2025-09-23 08:07:47.967521 | orchestrator | {
2025-09-23 08:07:47.967532 | orchestrator | "type": "v2",
2025-09-23 08:07:47.967543 | orchestrator | "addr": "192.168.16.11:3300",
2025-09-23 08:07:47.967553 | orchestrator | "nonce": 0
2025-09-23 08:07:47.967564 | orchestrator | },
2025-09-23 08:07:47.967575 | orchestrator | {
2025-09-23 08:07:47.967586 | orchestrator | "type": "v1",
2025-09-23 08:07:47.967597 | orchestrator | "addr": "192.168.16.11:6789",
2025-09-23 08:07:47.967607 | orchestrator | "nonce": 0
2025-09-23 08:07:47.967618 | orchestrator | }
2025-09-23 08:07:47.967629 | orchestrator | ]
2025-09-23 08:07:47.967639 | orchestrator | },
2025-09-23 08:07:47.967650 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-09-23 08:07:47.967662 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-09-23 08:07:47.967681 | orchestrator | "priority": 0,
2025-09-23 08:07:47.967692 | orchestrator | "weight": 0,
2025-09-23 08:07:47.967703 | orchestrator | "crush_location": "{}"
2025-09-23 08:07:47.967714 | orchestrator | },
2025-09-23 08:07:47.967725 | orchestrator | {
2025-09-23 08:07:47.967736 | orchestrator | "rank": 2,
2025-09-23 08:07:47.967747 | orchestrator | "name": "testbed-node-2",
2025-09-23 08:07:47.967757 | orchestrator | "public_addrs": {
2025-09-23 08:07:47.967768 | orchestrator | "addrvec": [
2025-09-23 08:07:47.967779 | orchestrator | {
2025-09-23 08:07:47.967790 | orchestrator | "type": "v2",
2025-09-23 08:07:47.967801 | orchestrator | "addr": "192.168.16.12:3300",
2025-09-23 08:07:47.967811 | orchestrator | "nonce": 0
2025-09-23 08:07:47.967822 | orchestrator | },
2025-09-23 08:07:47.967833 | orchestrator | {
2025-09-23 08:07:47.967844 | orchestrator | "type": "v1",
2025-09-23 08:07:47.967855 | orchestrator | "addr": "192.168.16.12:6789",
2025-09-23 08:07:47.967866 | orchestrator | "nonce": 0
2025-09-23 08:07:47.967876 | orchestrator | }
2025-09-23 08:07:47.967887 | orchestrator | ]
2025-09-23 08:07:47.967898 | orchestrator | },
2025-09-23 08:07:47.967909 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-09-23 08:07:47.967920 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-09-23 08:07:47.967978 | orchestrator | "priority": 0,
2025-09-23 08:07:47.967992 | orchestrator | "weight": 0,
2025-09-23 08:07:47.968003 | orchestrator | "crush_location": "{}"
2025-09-23 08:07:47.968014 | orchestrator | }
2025-09-23 08:07:47.968025 | orchestrator | ]
2025-09-23 08:07:47.968036 | orchestrator | }
2025-09-23 08:07:47.968046 | orchestrator | }
2025-09-23 08:07:47.968057 | orchestrator |
2025-09-23 08:07:47.968069 | orchestrator | + echo
2025-09-23 08:07:47.968080 | orchestrator | + echo '# Ceph free space status'
2025-09-23 08:07:47.968091 | orchestrator | # Ceph free space status
2025-09-23 08:07:47.968102 | orchestrator |
2025-09-23 08:07:47.968113 | orchestrator | + echo
2025-09-23 08:07:47.968124 | orchestrator | + ceph df
2025-09-23 08:07:48.555103 | orchestrator | --- RAW STORAGE ---
2025-09-23 08:07:48.555193 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-09-23 08:07:48.555230 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-23 08:07:48.555242 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-23 08:07:48.555252 | orchestrator |
2025-09-23 08:07:48.555262 | orchestrator | --- POOLS ---
2025-09-23 08:07:48.555273 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-09-23 08:07:48.555284 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2025-09-23 08:07:48.555294 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-09-23 08:07:48.555304 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-09-23 08:07:48.555314 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-09-23 08:07:48.555323 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-09-23 08:07:48.555333 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-09-23 08:07:48.555343 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-09-23 08:07:48.555353 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-09-23 08:07:48.555362 | orchestrator | .rgw.root 9 32 2.6 KiB 6 48 KiB 0 52 GiB
2025-09-23 08:07:48.555372 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-09-23 08:07:48.555382 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-09-23 08:07:48.555392 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 6.01 35 GiB
2025-09-23 08:07:48.555401 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-09-23 08:07:48.555411 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-09-23 08:07:48.601742 | orchestrator | ++ semver latest 5.0.0
2025-09-23 08:07:48.667316 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-23 08:07:48.667396 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-23 08:07:48.667409 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-09-23 08:07:48.667420 | orchestrator | + osism apply facts
2025-09-23 08:08:00.739885 | orchestrator | 2025-09-23 08:08:00 | INFO  | Task ddf589b5-a6e3-4205-9f19-78aa9eb716f1 (facts) was prepared for execution.
2025-09-23 08:08:00.740003 | orchestrator | 2025-09-23 08:08:00 | INFO  | It takes a moment until task ddf589b5-a6e3-4205-9f19-78aa9eb716f1 (facts) has been started and output is visible here.
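The check script above pipes `ceph quorum_status` through `jq` for inspection, and the `osism validate ceph-mons` play that follows asserts the same condition (all monitors in the monmap are in quorum). As a minimal illustrative sketch of that assertion (not the OSISM implementation), the JSON shown above can be checked by comparing `quorum_names` against the monmap; the sample data below is a trimmed copy of the log output:

```python
import json

# Trimmed `ceph quorum_status` output, shaped like the JSON in the log above
# (the fsid is the testbed's placeholder value).
QUORUM_STATUS = json.loads("""
{
  "election_epoch": 6,
  "quorum": [0, 1, 2],
  "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
  "quorum_leader_name": "testbed-node-0",
  "monmap": {
    "fsid": "11111111-1111-1111-1111-111111111111",
    "mons": [
      {"rank": 0, "name": "testbed-node-0"},
      {"rank": 1, "name": "testbed-node-1"},
      {"rank": 2, "name": "testbed-node-2"}
    ]
  }
}
""")


def mons_out_of_quorum(status: dict) -> list[str]:
    """Return monitors listed in the monmap but missing from quorum_names."""
    in_quorum = set(status["quorum_names"])
    return [m["name"] for m in status["monmap"]["mons"] if m["name"] not in in_quorum]


missing = mons_out_of_quorum(QUORUM_STATUS)
print("quorum OK" if not missing else f"out of quorum: {missing}")
```

With the healthy cluster captured above, the check reports full quorum; a mon name present in `monmap.mons` but absent from `quorum_names` would be flagged, which is the failure condition the validate play tests for.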
2025-09-23 08:08:14.382866 | orchestrator |
2025-09-23 08:08:14.382983 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-23 08:08:14.383000 | orchestrator |
2025-09-23 08:08:14.383013 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-23 08:08:14.383067 | orchestrator | Tuesday 23 September 2025 08:08:04 +0000 (0:00:00.270) 0:00:00.270 *****
2025-09-23 08:08:14.383080 | orchestrator | ok: [testbed-manager]
2025-09-23 08:08:14.383092 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:14.383103 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:08:14.383114 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:08:14.383125 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:08:14.383136 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:08:14.383147 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:08:14.383158 | orchestrator |
2025-09-23 08:08:14.383169 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-23 08:08:14.383181 | orchestrator | Tuesday 23 September 2025 08:08:06 +0000 (0:00:01.443) 0:00:01.714 *****
2025-09-23 08:08:14.383192 | orchestrator | skipping: [testbed-manager]
2025-09-23 08:08:14.383208 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:14.383227 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:08:14.383246 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:08:14.383265 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:08:14.383286 | orchestrator | skipping: [testbed-node-4]
2025-09-23 08:08:14.383305 | orchestrator | skipping: [testbed-node-5]
2025-09-23 08:08:14.383323 | orchestrator |
2025-09-23 08:08:14.383336 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-23 08:08:14.383347 | orchestrator |
2025-09-23 08:08:14.383358 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-23 08:08:14.383369 | orchestrator | Tuesday 23 September 2025 08:08:07 +0000 (0:00:01.252) 0:00:02.967 *****
2025-09-23 08:08:14.383380 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:08:14.383392 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:14.383406 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:08:14.383419 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:08:14.383431 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:08:14.383444 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:08:14.383457 | orchestrator | ok: [testbed-manager]
2025-09-23 08:08:14.383470 | orchestrator |
2025-09-23 08:08:14.383484 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-23 08:08:14.383497 | orchestrator |
2025-09-23 08:08:14.383510 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-23 08:08:14.383523 | orchestrator | Tuesday 23 September 2025 08:08:13 +0000 (0:00:05.836) 0:00:08.803 *****
2025-09-23 08:08:14.383536 | orchestrator | skipping: [testbed-manager]
2025-09-23 08:08:14.383549 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:14.383560 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:08:14.383572 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:08:14.383583 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:08:14.383594 | orchestrator | skipping: [testbed-node-4]
2025-09-23 08:08:14.383605 | orchestrator | skipping: [testbed-node-5]
2025-09-23 08:08:14.383616 | orchestrator |
2025-09-23 08:08:14.383627 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 08:08:14.383639 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:14.383652 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:14.383689 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:14.383701 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:14.383713 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:14.383724 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:14.383735 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:14.383747 | orchestrator |
2025-09-23 08:08:14.383758 | orchestrator |
2025-09-23 08:08:14.383769 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 08:08:14.383780 | orchestrator | Tuesday 23 September 2025 08:08:13 +0000 (0:00:00.576) 0:00:09.380 *****
2025-09-23 08:08:14.383809 | orchestrator | ===============================================================================
2025-09-23 08:08:14.383821 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.84s
2025-09-23 08:08:14.383832 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.44s
2025-09-23 08:08:14.383843 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s
2025-09-23 08:08:14.383854 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2025-09-23 08:08:14.701682 | orchestrator | + osism validate ceph-mons
2025-09-23 08:08:44.082950 | orchestrator |
2025-09-23 08:08:44.083052 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-09-23 08:08:44.083068 | orchestrator |
2025-09-23 08:08:44.083080 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-23 08:08:44.083091 | orchestrator | Tuesday 23 September 2025 08:08:28 +0000 (0:00:00.413) 0:00:00.413 *****
2025-09-23 08:08:44.083103 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:08:44.083114 | orchestrator |
2025-09-23 08:08:44.083183 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-23 08:08:44.083195 | orchestrator | Tuesday 23 September 2025 08:08:28 +0000 (0:00:00.578) 0:00:00.991 *****
2025-09-23 08:08:44.083206 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:08:44.083217 | orchestrator |
2025-09-23 08:08:44.083228 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-23 08:08:44.083239 | orchestrator | Tuesday 23 September 2025 08:08:29 +0000 (0:00:00.725) 0:00:01.716 *****
2025-09-23 08:08:44.083251 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.083263 | orchestrator |
2025-09-23 08:08:44.083274 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-09-23 08:08:44.083285 | orchestrator | Tuesday 23 September 2025 08:08:29 +0000 (0:00:00.187) 0:00:01.904 *****
2025-09-23 08:08:44.083296 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.083307 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:08:44.083319 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:08:44.083330 | orchestrator |
2025-09-23 08:08:44.083342 | orchestrator | TASK [Get container info] ******************************************************
2025-09-23 08:08:44.083353 | orchestrator | Tuesday 23 September 2025 08:08:30 +0000 (0:00:00.261) 0:00:02.165 *****
2025-09-23 08:08:44.083364 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.083374 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:08:44.083385 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:08:44.083396 | orchestrator |
2025-09-23 08:08:44.083407 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-09-23 08:08:44.083418 | orchestrator | Tuesday 23 September 2025 08:08:31 +0000 (0:00:00.980) 0:00:03.145 *****
2025-09-23 08:08:44.083429 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.083464 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:08:44.083478 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:08:44.083491 | orchestrator |
2025-09-23 08:08:44.083504 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-09-23 08:08:44.083517 | orchestrator | Tuesday 23 September 2025 08:08:31 +0000 (0:00:00.263) 0:00:03.409 *****
2025-09-23 08:08:44.083530 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.083543 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:08:44.083554 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:08:44.083565 | orchestrator |
2025-09-23 08:08:44.083576 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-23 08:08:44.083587 | orchestrator | Tuesday 23 September 2025 08:08:31 +0000 (0:00:00.399) 0:00:03.808 *****
2025-09-23 08:08:44.083598 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.083609 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:08:44.083620 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:08:44.083631 | orchestrator |
2025-09-23 08:08:44.083641 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-09-23 08:08:44.083652 | orchestrator | Tuesday 23 September 2025 08:08:32 +0000 (0:00:00.278) 0:00:04.087 *****
2025-09-23 08:08:44.083663 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.083674 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:08:44.083685 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:08:44.083696 | orchestrator |
2025-09-23 08:08:44.083707 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-09-23 08:08:44.083718 | orchestrator | Tuesday 23 September 2025 08:08:32 +0000 (0:00:00.264) 0:00:04.351 *****
2025-09-23 08:08:44.083729 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.083740 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:08:44.083750 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:08:44.083761 | orchestrator |
2025-09-23 08:08:44.083772 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-23 08:08:44.083783 | orchestrator | Tuesday 23 September 2025 08:08:32 +0000 (0:00:00.292) 0:00:04.643 *****
2025-09-23 08:08:44.083794 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.083805 | orchestrator |
2025-09-23 08:08:44.083830 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-23 08:08:44.083841 | orchestrator | Tuesday 23 September 2025 08:08:32 +0000 (0:00:00.244) 0:00:04.887 *****
2025-09-23 08:08:44.083852 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.083863 | orchestrator |
2025-09-23 08:08:44.083874 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-23 08:08:44.083885 | orchestrator | Tuesday 23 September 2025 08:08:33 +0000 (0:00:00.498) 0:00:05.385 *****
2025-09-23 08:08:44.083895 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.083906 | orchestrator |
2025-09-23 08:08:44.083917 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:08:44.083928 | orchestrator | Tuesday 23 September 2025 08:08:34 +0000 (0:00:00.658) 0:00:06.043 *****
2025-09-23 08:08:44.083939 | orchestrator |
2025-09-23 08:08:44.083950 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:08:44.083961 | orchestrator | Tuesday 23 September 2025 08:08:34 +0000 (0:00:00.069) 0:00:06.112 *****
2025-09-23 08:08:44.083972 | orchestrator |
2025-09-23 08:08:44.083982 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:08:44.083993 | orchestrator | Tuesday 23 September 2025 08:08:34 +0000 (0:00:00.069) 0:00:06.182 *****
2025-09-23 08:08:44.084004 | orchestrator |
2025-09-23 08:08:44.084015 | orchestrator | TASK [Print report file information] *******************************************
2025-09-23 08:08:44.084026 | orchestrator | Tuesday 23 September 2025 08:08:34 +0000 (0:00:00.073) 0:00:06.255 *****
2025-09-23 08:08:44.084036 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.084047 | orchestrator |
2025-09-23 08:08:44.084058 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-09-23 08:08:44.084069 | orchestrator | Tuesday 23 September 2025 08:08:34 +0000 (0:00:00.264) 0:00:06.520 *****
2025-09-23 08:08:44.084087 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.084099 | orchestrator |
2025-09-23 08:08:44.084147 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-09-23 08:08:44.084160 | orchestrator | Tuesday 23 September 2025 08:08:34 +0000 (0:00:00.262) 0:00:06.783 *****
2025-09-23 08:08:44.084171 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.084182 | orchestrator |
2025-09-23 08:08:44.084192 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-09-23 08:08:44.084203 | orchestrator | Tuesday 23 September 2025 08:08:34 +0000 (0:00:00.136) 0:00:06.920 *****
2025-09-23 08:08:44.084214 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:08:44.084225 | orchestrator |
2025-09-23 08:08:44.084236 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-09-23 08:08:44.084247 | orchestrator | Tuesday 23 September 2025 08:08:36 +0000 (0:00:01.795) 0:00:08.715 *****
2025-09-23 08:08:44.084258 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.084268 | orchestrator |
2025-09-23 08:08:44.084279 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-09-23 08:08:44.084290 | orchestrator | Tuesday 23 September 2025 08:08:37 +0000 (0:00:00.345) 0:00:09.061 *****
2025-09-23 08:08:44.084300 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.084311 | orchestrator |
2025-09-23 08:08:44.084322 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-09-23 08:08:44.084333 | orchestrator | Tuesday 23 September 2025 08:08:37 +0000 (0:00:00.132) 0:00:09.193 *****
2025-09-23 08:08:44.084343 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.084354 | orchestrator |
2025-09-23 08:08:44.084365 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-09-23 08:08:44.084376 | orchestrator | Tuesday 23 September 2025 08:08:37 +0000 (0:00:00.334) 0:00:09.528 *****
2025-09-23 08:08:44.084387 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.084397 | orchestrator |
2025-09-23 08:08:44.084408 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-09-23 08:08:44.084419 | orchestrator | Tuesday 23 September 2025 08:08:38 +0000 (0:00:00.508) 0:00:10.037 *****
2025-09-23 08:08:44.084430 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:08:44.084440 | orchestrator |
2025-09-23 08:08:44.084451 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-09-23 08:08:44.084462 | orchestrator | Tuesday 23 September 2025 08:08:38 +0000 (0:00:00.128) 0:00:10.165 *****
2025-09-23 08:08:44.084473 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:08:44.084483 | orchestrator |
2025-09-23 08:08:44.084494 | orchestrator | TASK
[Prepare status test vars] ************************************************ 2025-09-23 08:08:44.084505 | orchestrator | Tuesday 23 September 2025 08:08:38 +0000 (0:00:00.122) 0:00:10.288 ***** 2025-09-23 08:08:44.084516 | orchestrator | ok: [testbed-node-0] 2025-09-23 08:08:44.084526 | orchestrator | 2025-09-23 08:08:44.084537 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-23 08:08:44.084548 | orchestrator | Tuesday 23 September 2025 08:08:38 +0000 (0:00:00.111) 0:00:10.400 ***** 2025-09-23 08:08:44.084559 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:08:44.084569 | orchestrator | 2025-09-23 08:08:44.084580 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-23 08:08:44.084591 | orchestrator | Tuesday 23 September 2025 08:08:39 +0000 (0:00:01.359) 0:00:11.759 ***** 2025-09-23 08:08:44.084602 | orchestrator | ok: [testbed-node-0] 2025-09-23 08:08:44.084612 | orchestrator | 2025-09-23 08:08:44.084623 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-23 08:08:44.084634 | orchestrator | Tuesday 23 September 2025 08:08:40 +0000 (0:00:00.299) 0:00:12.058 ***** 2025-09-23 08:08:44.084645 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:08:44.084655 | orchestrator | 2025-09-23 08:08:44.084666 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-09-23 08:08:44.084677 | orchestrator | Tuesday 23 September 2025 08:08:40 +0000 (0:00:00.135) 0:00:12.193 ***** 2025-09-23 08:08:44.084696 | orchestrator | ok: [testbed-node-0] 2025-09-23 08:08:44.084707 | orchestrator | 2025-09-23 08:08:44.084718 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-09-23 08:08:44.084729 | orchestrator | Tuesday 23 September 2025 08:08:40 +0000 (0:00:00.143) 0:00:12.336 ***** 2025-09-23 08:08:44.084739 | 
orchestrator | skipping: [testbed-node-0] 2025-09-23 08:08:44.084750 | orchestrator | 2025-09-23 08:08:44.084761 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-23 08:08:44.084772 | orchestrator | Tuesday 23 September 2025 08:08:40 +0000 (0:00:00.151) 0:00:12.488 ***** 2025-09-23 08:08:44.084783 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:08:44.084794 | orchestrator | 2025-09-23 08:08:44.084805 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-23 08:08:44.084815 | orchestrator | Tuesday 23 September 2025 08:08:40 +0000 (0:00:00.144) 0:00:12.632 ***** 2025-09-23 08:08:44.084826 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-23 08:08:44.084837 | orchestrator | 2025-09-23 08:08:44.084848 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-23 08:08:44.084858 | orchestrator | Tuesday 23 September 2025 08:08:40 +0000 (0:00:00.249) 0:00:12.882 ***** 2025-09-23 08:08:44.084869 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:08:44.084880 | orchestrator | 2025-09-23 08:08:44.084891 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-23 08:08:44.084902 | orchestrator | Tuesday 23 September 2025 08:08:41 +0000 (0:00:00.489) 0:00:13.372 ***** 2025-09-23 08:08:44.084912 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-23 08:08:44.084923 | orchestrator | 2025-09-23 08:08:44.084934 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-23 08:08:44.084945 | orchestrator | Tuesday 23 September 2025 08:08:43 +0000 (0:00:01.959) 0:00:15.331 ***** 2025-09-23 08:08:44.084956 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-23 08:08:44.084967 | orchestrator | 2025-09-23 08:08:44.084978 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2025-09-23 08:08:44.084989 | orchestrator | Tuesday 23 September 2025 08:08:43 +0000 (0:00:00.255) 0:00:15.587 ***** 2025-09-23 08:08:44.084999 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-23 08:08:44.085010 | orchestrator | 2025-09-23 08:08:44.085028 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-23 08:08:46.408713 | orchestrator | Tuesday 23 September 2025 08:08:43 +0000 (0:00:00.284) 0:00:15.871 ***** 2025-09-23 08:08:46.408817 | orchestrator | 2025-09-23 08:08:46.408833 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-23 08:08:46.408845 | orchestrator | Tuesday 23 September 2025 08:08:43 +0000 (0:00:00.069) 0:00:15.941 ***** 2025-09-23 08:08:46.408856 | orchestrator | 2025-09-23 08:08:46.408868 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-23 08:08:46.408912 | orchestrator | Tuesday 23 September 2025 08:08:43 +0000 (0:00:00.068) 0:00:16.009 ***** 2025-09-23 08:08:46.408924 | orchestrator | 2025-09-23 08:08:46.408935 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-23 08:08:46.408947 | orchestrator | Tuesday 23 September 2025 08:08:44 +0000 (0:00:00.089) 0:00:16.099 ***** 2025-09-23 08:08:46.408958 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-23 08:08:46.408969 | orchestrator | 2025-09-23 08:08:46.408980 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-23 08:08:46.408991 | orchestrator | Tuesday 23 September 2025 08:08:45 +0000 (0:00:01.383) 0:00:17.482 ***** 2025-09-23 08:08:46.409003 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-23 08:08:46.409022 | orchestrator |  "msg": [ 
2025-09-23 08:08:46.409035 | orchestrator |     "Validator run completed.",
2025-09-23 08:08:46.409047 | orchestrator |     "You can find the report file here:",
2025-09-23 08:08:46.409082 | orchestrator |     "/opt/reports/validator/ceph-mons-validator-2025-09-23T08:08:28+00:00-report.json",
2025-09-23 08:08:46.409095 | orchestrator |     "on the following host:",
2025-09-23 08:08:46.409107 | orchestrator |     "testbed-manager"
2025-09-23 08:08:46.409118 | orchestrator |   ]
2025-09-23 08:08:46.409191 | orchestrator | }
2025-09-23 08:08:46.409203 | orchestrator |
2025-09-23 08:08:46.409215 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 08:08:46.409231 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-23 08:08:46.409244 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:46.409256 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:08:46.409267 | orchestrator |
2025-09-23 08:08:46.409278 | orchestrator |
2025-09-23 08:08:46.409289 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 08:08:46.409300 | orchestrator | Tuesday 23 September 2025 08:08:45 +0000 (0:00:00.417) 0:00:17.899 *****
2025-09-23 08:08:46.409311 | orchestrator | ===============================================================================
2025-09-23 08:08:46.409322 | orchestrator | Aggregate test results step one ----------------------------------------- 1.96s
2025-09-23 08:08:46.409333 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.80s
2025-09-23 08:08:46.409344 | orchestrator | Write report file ------------------------------------------------------- 1.38s
2025-09-23 08:08:46.409355 | orchestrator | Gather status data ------------------------------------------------------ 1.36s
2025-09-23 08:08:46.409366 | orchestrator | Get container info ------------------------------------------------------ 0.98s
2025-09-23 08:08:46.409376 | orchestrator | Create report output directory ------------------------------------------ 0.73s
2025-09-23 08:08:46.409387 | orchestrator | Aggregate test results step three --------------------------------------- 0.66s
2025-09-23 08:08:46.409398 | orchestrator | Get timestamp for report file ------------------------------------------- 0.58s
2025-09-23 08:08:46.409409 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.51s
2025-09-23 08:08:46.409426 | orchestrator | Aggregate test results step two ----------------------------------------- 0.50s
2025-09-23 08:08:46.409437 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.49s
2025-09-23 08:08:46.409448 | orchestrator | Print report file information ------------------------------------------- 0.42s
2025-09-23 08:08:46.409459 | orchestrator | Set test result to passed if container is existing ---------------------- 0.40s
2025-09-23 08:08:46.409470 | orchestrator | Set quorum test data ---------------------------------------------------- 0.35s
2025-09-23 08:08:46.409481 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2025-09-23 08:08:46.409492 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2025-09-23 08:08:46.409503 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s
2025-09-23 08:08:46.409514 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2025-09-23 08:08:46.409525 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s
2025-09-23 08:08:46.409536 | orchestrator | Print report file information ------------------------------------------- 0.26s
2025-09-23 08:08:46.731367 | orchestrator | + osism validate ceph-mgrs
2025-09-23 08:09:18.192651 | orchestrator |
2025-09-23 08:09:18.193587 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-09-23 08:09:18.193640 | orchestrator |
2025-09-23 08:09:18.193651 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-23 08:09:18.193660 | orchestrator | Tuesday 23 September 2025 08:09:03 +0000 (0:00:00.443) 0:00:00.443 *****
2025-09-23 08:09:18.193669 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:18.193699 | orchestrator |
2025-09-23 08:09:18.193708 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-23 08:09:18.193716 | orchestrator | Tuesday 23 September 2025 08:09:03 +0000 (0:00:00.648) 0:00:01.092 *****
2025-09-23 08:09:18.193724 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:18.193732 | orchestrator |
2025-09-23 08:09:18.193740 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-23 08:09:18.193749 | orchestrator | Tuesday 23 September 2025 08:09:04 +0000 (0:00:00.879) 0:00:01.971 *****
2025-09-23 08:09:18.193757 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.193767 | orchestrator |
2025-09-23 08:09:18.193775 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-09-23 08:09:18.193783 | orchestrator | Tuesday 23 September 2025 08:09:05 +0000 (0:00:00.254) 0:00:02.226 *****
2025-09-23 08:09:18.193791 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.193799 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:09:18.193807 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:09:18.193815 | orchestrator |
2025-09-23 08:09:18.193823 | orchestrator | TASK [Get container info] ******************************************************
2025-09-23 08:09:18.193831 | orchestrator | Tuesday 23 September 2025 08:09:05 +0000 (0:00:00.304) 0:00:02.531 *****
2025-09-23 08:09:18.193839 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:09:18.193847 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:09:18.193855 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.193863 | orchestrator |
2025-09-23 08:09:18.193871 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-09-23 08:09:18.193879 | orchestrator | Tuesday 23 September 2025 08:09:06 +0000 (0:00:00.992) 0:00:03.524 *****
2025-09-23 08:09:18.193887 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.193896 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:09:18.193904 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:09:18.193911 | orchestrator |
2025-09-23 08:09:18.193919 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-09-23 08:09:18.193927 | orchestrator | Tuesday 23 September 2025 08:09:06 +0000 (0:00:00.290) 0:00:03.814 *****
2025-09-23 08:09:18.193935 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.193943 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:09:18.193951 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:09:18.193959 | orchestrator |
2025-09-23 08:09:18.193967 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-23 08:09:18.193975 | orchestrator | Tuesday 23 September 2025 08:09:07 +0000 (0:00:00.521) 0:00:04.335 *****
2025-09-23 08:09:18.193983 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.193991 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:09:18.193998 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:09:18.194006 | orchestrator |
2025-09-23 08:09:18.194014 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-09-23 08:09:18.194078 | orchestrator | Tuesday 23 September 2025 08:09:07 +0000 (0:00:00.322) 0:00:04.658 *****
2025-09-23 08:09:18.194086 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.194094 | orchestrator | skipping: [testbed-node-1]
2025-09-23 08:09:18.194102 | orchestrator | skipping: [testbed-node-2]
2025-09-23 08:09:18.194111 | orchestrator |
2025-09-23 08:09:18.194119 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-09-23 08:09:18.194152 | orchestrator | Tuesday 23 September 2025 08:09:07 +0000 (0:00:00.300) 0:00:04.959 *****
2025-09-23 08:09:18.194161 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.194169 | orchestrator | ok: [testbed-node-1]
2025-09-23 08:09:18.194177 | orchestrator | ok: [testbed-node-2]
2025-09-23 08:09:18.194185 | orchestrator |
2025-09-23 08:09:18.194193 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-23 08:09:18.194201 | orchestrator | Tuesday 23 September 2025 08:09:08 +0000 (0:00:00.318) 0:00:05.277 *****
2025-09-23 08:09:18.194209 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.194257 | orchestrator |
2025-09-23 08:09:18.194267 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-23 08:09:18.194275 | orchestrator | Tuesday 23 September 2025 08:09:08 +0000 (0:00:00.265) 0:00:05.542 *****
2025-09-23 08:09:18.194283 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.194291 | orchestrator |
2025-09-23 08:09:18.194299 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-23 08:09:18.194307 | orchestrator | Tuesday 23 September 2025 08:09:08 +0000 (0:00:00.505) 0:00:06.048 *****
2025-09-23 08:09:18.194315 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.194323 | orchestrator |
2025-09-23 08:09:18.194331 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:09:18.194339 | orchestrator | Tuesday 23 September 2025 08:09:09 +0000 (0:00:00.682) 0:00:06.730 *****
2025-09-23 08:09:18.194347 | orchestrator |
2025-09-23 08:09:18.194355 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:09:18.194363 | orchestrator | Tuesday 23 September 2025 08:09:09 +0000 (0:00:00.065) 0:00:06.795 *****
2025-09-23 08:09:18.194371 | orchestrator |
2025-09-23 08:09:18.194379 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:09:18.194387 | orchestrator | Tuesday 23 September 2025 08:09:09 +0000 (0:00:00.072) 0:00:06.868 *****
2025-09-23 08:09:18.194395 | orchestrator |
2025-09-23 08:09:18.194403 | orchestrator | TASK [Print report file information] *******************************************
2025-09-23 08:09:18.194411 | orchestrator | Tuesday 23 September 2025 08:09:09 +0000 (0:00:00.068) 0:00:06.937 *****
2025-09-23 08:09:18.194419 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.194427 | orchestrator |
2025-09-23 08:09:18.194435 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-09-23 08:09:18.194443 | orchestrator | Tuesday 23 September 2025 08:09:09 +0000 (0:00:00.237) 0:00:07.174 *****
2025-09-23 08:09:18.194451 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.194459 | orchestrator |
2025-09-23 08:09:18.194489 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-09-23 08:09:18.194498 | orchestrator | Tuesday 23 September 2025 08:09:10 +0000 (0:00:00.265) 0:00:07.440 *****
2025-09-23 08:09:18.194506 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.194524 | orchestrator |
2025-09-23 08:09:18.194541 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-09-23 08:09:18.194549 | orchestrator | Tuesday 23 September 2025 08:09:10 +0000 (0:00:00.104) 0:00:07.544 *****
2025-09-23 08:09:18.194557 | orchestrator | changed: [testbed-node-0]
2025-09-23 08:09:18.194602 | orchestrator |
2025-09-23 08:09:18.194611 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-09-23 08:09:18.194619 | orchestrator | Tuesday 23 September 2025 08:09:12 +0000 (0:00:02.053) 0:00:09.597 *****
2025-09-23 08:09:18.194627 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.194635 | orchestrator |
2025-09-23 08:09:18.194643 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-09-23 08:09:18.194651 | orchestrator | Tuesday 23 September 2025 08:09:12 +0000 (0:00:00.265) 0:00:09.863 *****
2025-09-23 08:09:18.194659 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.194667 | orchestrator |
2025-09-23 08:09:18.194675 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-09-23 08:09:18.194683 | orchestrator | Tuesday 23 September 2025 08:09:12 +0000 (0:00:00.316) 0:00:10.180 *****
2025-09-23 08:09:18.194714 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.194724 | orchestrator |
2025-09-23 08:09:18.194732 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-09-23 08:09:18.194740 | orchestrator | Tuesday 23 September 2025 08:09:13 +0000 (0:00:00.144) 0:00:10.325 *****
2025-09-23 08:09:18.194748 | orchestrator | ok: [testbed-node-0]
2025-09-23 08:09:18.194756 | orchestrator |
2025-09-23 08:09:18.194764 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-09-23 08:09:18.194772 | orchestrator | Tuesday 23 September 2025 08:09:13 +0000 (0:00:00.352) 0:00:10.677 *****
2025-09-23 08:09:18.194786 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:18.194795 | orchestrator |
2025-09-23 08:09:18.194803 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-09-23 08:09:18.194810 | orchestrator | Tuesday 23 September 2025 08:09:13 +0000 (0:00:00.250) 0:00:10.928 *****
2025-09-23 08:09:18.194818 | orchestrator | skipping: [testbed-node-0]
2025-09-23 08:09:18.194826 | orchestrator |
2025-09-23 08:09:18.194834 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-23 08:09:18.194842 | orchestrator | Tuesday 23 September 2025 08:09:13 +0000 (0:00:00.262) 0:00:11.191 *****
2025-09-23 08:09:18.194850 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:18.194858 | orchestrator |
2025-09-23 08:09:18.194866 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-23 08:09:18.194874 | orchestrator | Tuesday 23 September 2025 08:09:15 +0000 (0:00:01.240) 0:00:12.431 *****
2025-09-23 08:09:18.194882 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:18.194890 | orchestrator |
2025-09-23 08:09:18.194898 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-23 08:09:18.194906 | orchestrator | Tuesday 23 September 2025 08:09:15 +0000 (0:00:00.272) 0:00:12.704 *****
2025-09-23 08:09:18.194914 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:18.194922 | orchestrator |
2025-09-23 08:09:18.194930 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:09:18.194938 | orchestrator | Tuesday 23 September 2025 08:09:15 +0000 (0:00:00.263) 0:00:12.967 *****
2025-09-23 08:09:18.194946 | orchestrator |
2025-09-23 08:09:18.194954 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:09:18.194962 | orchestrator | Tuesday 23 September 2025 08:09:15 +0000 (0:00:00.071) 0:00:13.039 *****
2025-09-23 08:09:18.194970 | orchestrator |
2025-09-23 08:09:18.194978 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:09:18.195000 | orchestrator | Tuesday 23 September 2025 08:09:15 +0000 (0:00:00.070) 0:00:13.109 *****
2025-09-23 08:09:18.195009 | orchestrator |
2025-09-23 08:09:18.195017 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-23 08:09:18.195025 | orchestrator | Tuesday 23 September 2025 08:09:15 +0000 (0:00:00.100) 0:00:13.210 *****
2025-09-23 08:09:18.195033 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:18.195041 | orchestrator |
2025-09-23 08:09:18.195049 | orchestrator | TASK [Print report file information] *******************************************
2025-09-23 08:09:18.195057 | orchestrator | Tuesday 23 September 2025 08:09:17 +0000 (0:00:01.549) 0:00:14.760 *****
2025-09-23 08:09:18.195069 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-23 08:09:18.195077 | orchestrator |   "msg": [
2025-09-23 08:09:18.195086 | orchestrator |     "Validator run completed.",
2025-09-23 08:09:18.195094 | orchestrator |     "You can find the report file here:",
2025-09-23 08:09:18.195102 | orchestrator |     "/opt/reports/validator/ceph-mgrs-validator-2025-09-23T08:09:03+00:00-report.json",
2025-09-23 08:09:18.195111 | orchestrator |     "on the following host:",
2025-09-23 08:09:18.195119 | orchestrator |     "testbed-manager"
2025-09-23 08:09:18.195127 | orchestrator |   ]
2025-09-23 08:09:18.195136 | orchestrator | }
2025-09-23 08:09:18.195144 | orchestrator |
2025-09-23 08:09:18.195152 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 08:09:18.195161 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-23 08:09:18.195171 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:09:18.195186 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-23 08:09:18.505107 | orchestrator |
2025-09-23 08:09:18.505201 | orchestrator |
2025-09-23 08:09:18.505216 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 08:09:18.505283 | orchestrator | Tuesday 23 September 2025 08:09:18 +0000 (0:00:00.621) 0:00:15.381 *****
2025-09-23 08:09:18.505295 | orchestrator | ===============================================================================
2025-09-23 08:09:18.505306 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.05s
2025-09-23 08:09:18.505317 | orchestrator | Write report file ------------------------------------------------------- 1.55s
2025-09-23 08:09:18.505328 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s
2025-09-23 08:09:18.505339 | orchestrator | Get container info ------------------------------------------------------ 0.99s
2025-09-23 08:09:18.505350 | orchestrator | Create report output directory ------------------------------------------ 0.88s
2025-09-23 08:09:18.505361 | orchestrator | Aggregate test results step three --------------------------------------- 0.68s
2025-09-23 08:09:18.505371 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s
2025-09-23 08:09:18.505382 | orchestrator | Print report file information ------------------------------------------- 0.62s
2025-09-23 08:09:18.505393 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s
2025-09-23 08:09:18.505404 | orchestrator | Aggregate test results step two ----------------------------------------- 0.51s
2025-09-23 08:09:18.505415 | orchestrator | Pass test if required mgr modules are enabled --------------------------- 0.35s
2025-09-23 08:09:18.505426 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-09-23 08:09:18.505437 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s
2025-09-23 08:09:18.505447 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s
2025-09-23 08:09:18.505458 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-09-23 08:09:18.505469 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s
2025-09-23 08:09:18.505480 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2025-09-23 08:09:18.505491 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2025-09-23 08:09:18.505502 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.27s
2025-09-23 08:09:18.505513 | orchestrator | Fail due to missing containers ------------------------------------------ 0.27s
2025-09-23 08:09:18.792923 | orchestrator | + osism validate ceph-osds
2025-09-23 08:09:40.106687 | orchestrator |
2025-09-23 08:09:40.106779 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-09-23 08:09:40.106793 | orchestrator |
2025-09-23 08:09:40.106803 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-23 08:09:40.106813 | orchestrator | Tuesday 23 September 2025 08:09:35 +0000 (0:00:00.438) 0:00:00.438 *****
2025-09-23 08:09:40.106823 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:40.106832 | orchestrator |
2025-09-23 08:09:40.106841 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-23 08:09:40.106850 | orchestrator | Tuesday 23 September 2025 08:09:36 +0000 (0:00:00.743) 0:00:01.182 *****
2025-09-23 08:09:40.106859 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:40.106868 | orchestrator |
2025-09-23 08:09:40.106877 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-23 08:09:40.106886 | orchestrator | Tuesday 23 September 2025 08:09:36 +0000 (0:00:00.260) 0:00:01.442 *****
2025-09-23 08:09:40.106895 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 08:09:40.106904 | orchestrator |
2025-09-23 08:09:40.106913 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-23 08:09:40.106922 | orchestrator | Tuesday 23 September 2025 08:09:37 +0000 (0:00:01.016) 0:00:02.459 *****
2025-09-23 08:09:40.106951 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:09:40.106962 | orchestrator |
2025-09-23 08:09:40.106972 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-23 08:09:40.106981 | orchestrator | Tuesday 23 September 2025 08:09:37 +0000 (0:00:00.156) 0:00:02.615 *****
2025-09-23 08:09:40.106990 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:09:40.106998 | orchestrator |
2025-09-23 08:09:40.107007 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-23 08:09:40.107016 | orchestrator | Tuesday 23 September 2025 08:09:37 +0000 (0:00:00.148) 0:00:02.763 *****
2025-09-23 08:09:40.107039 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:09:40.107048 | orchestrator | skipping: [testbed-node-4]
2025-09-23 08:09:40.107057 | orchestrator | skipping: [testbed-node-5]
2025-09-23 08:09:40.107066 | orchestrator |
2025-09-23 08:09:40.107075 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-23 08:09:40.107083 | orchestrator | Tuesday 23 September 2025 08:09:38 +0000 (0:00:00.333) 0:00:03.097 *****
2025-09-23 08:09:40.107104 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:09:40.107113 | orchestrator |
2025-09-23 08:09:40.107123 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-23 08:09:40.107132 | orchestrator | Tuesday 23 September 2025 08:09:38 +0000 (0:00:00.156) 0:00:03.253 *****
2025-09-23 08:09:40.107141 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:09:40.107149 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:09:40.107158 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:09:40.107167 | orchestrator |
2025-09-23 08:09:40.107176 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-09-23 08:09:40.107185 | orchestrator | Tuesday 23 September 2025 08:09:38 +0000 (0:00:00.354) 0:00:03.608 *****
2025-09-23 08:09:40.107194 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:09:40.107203 | orchestrator |
2025-09-23 08:09:40.107212 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-23 08:09:40.107220 | orchestrator | Tuesday 23 September 2025 08:09:39 +0000 (0:00:00.639) 0:00:04.248 *****
2025-09-23 08:09:40.107229 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:09:40.107238 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:09:40.107247 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:09:40.107256 | orchestrator |
2025-09-23 08:09:40.107265 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-09-23 08:09:40.107274 | orchestrator | Tuesday 23 September 2025 08:09:39 +0000 (0:00:00.635) 0:00:04.884 *****
2025-09-23 08:09:40.107307 | orchestrator | skipping: [testbed-node-3] => (item={'id': '637b02d3eced0871664fd695c823b87427ee42a4df64c22b137d7ee5ff25e6da', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-23 08:09:40.107320 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f07d46f92a80d3d998e568d1d22df403be130463a270d6d1e124adbd1d7400e7', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-23 08:09:40.107331 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c8b891e13959868b29b7bae3de71a4e2fe2664365c33d15362673aa53272a7b5', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-23 08:09:40.107342 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e2c6510503996a6077554d58090e4bf818da3fd2dc90ac0379ec485e704e1744', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-23 08:09:40.107351 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7aa3b3e7f5c8a3d75278faebcaa915048f8cd3e31c1c03195b9e0ad188a22108', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-23 08:09:40.107381 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7176d1d9ce7cfc7a03dbb8ec6aecb65c28cf3849cb343e98020f5e218de082c8', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-09-23 08:09:40.107396 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3f40a92ff9276a4ca032a519714f05dd4b27505f7dc0702cfa5f041ae7d384bb', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-09-23 08:09:40.107405 | orchestrator |
skipping: [testbed-node-3] => (item={'id': 'd89ae00405cf5d05ef39aed32d4049fd28de29d7a9e8a4ebe50a5bce106ac61c', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-23 08:09:40.107414 | orchestrator | skipping: [testbed-node-3] => (item={'id': '64521b94c8d3736d707a870e35082e0a65fe45e9dc3f652c99062ce2aa58b693', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-23 08:09:40.107427 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ebfc7eb6f4b9f945f969333c626be0704963fb2a25531d111434f5173bc8ef23', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-23 08:09:40.107436 | orchestrator | skipping: [testbed-node-3] => (item={'id': '29c5d223059f1c1f9b76dafde697d928dfbb9af168bb658d9b42810a28f42a86', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-23 08:09:40.107445 | orchestrator | skipping: [testbed-node-3] => (item={'id': '83fb8b88c0599dcd7bb464c9446f14438e3c9ad1132f223d6fc39922d71c50fd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-23 08:09:40.107455 | orchestrator | ok: [testbed-node-3] => (item={'id': '05d4acacabf4de7277466b12ec5ecc0d6f87212b5f364bfb4c7cd1957a59f426', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-23 08:09:40.107465 | orchestrator | ok: [testbed-node-3] => (item={'id': '54f82df607a82211555d8744cb3bbe7d2324b5b430cf5539f163e856198e2773', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 
'Up 25 minutes'}) 2025-09-23 08:09:40.107474 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b1c03c8f7b6851aab79345736d7ff2214e2c3798ff151f854252b239ed8434b3', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-23 08:09:40.107483 | orchestrator | skipping: [testbed-node-3] => (item={'id': '01ce7852b482ebbaaf4bb645d88b3f750094d3564d7c567b96820f4d7fb90d9f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-23 08:09:40.107492 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ba6d75e10d69e604bbe45ddc08600390c3d1a2af7ccfca83d61308afd9d24477', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-23 08:09:40.107501 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9646a3b1327ce554912cd099bdabca3ec59b9f50737dedc0475bc73e5081d741', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-23 08:09:40.107510 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0b244d5930bbade729e08e1aea5c803325916a098f2607b95e8e6376c51d0d62', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-23 08:09:40.107526 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ab8ea69711313dff7b189a4d378dc713354e8261fe689467a5b0365dbeeaaa8b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-23 08:09:40.107535 | orchestrator | skipping: [testbed-node-4] => (item={'id': '52e085f39f941520a2e970665d2a254c93186b16f1402f50c1c36d684fc5ae75', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 
'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-23 08:09:40.107557 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8132b813ecb23b3754472828940dae1aad5d2f92db695fef2a08206156c1ba60', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-09-23 08:09:40.381247 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ffc35fc52ea5b78caba09923aee49fb3a6ecd72e8a12bcc94a1cec4f5bd82c92', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-09-23 08:09:40.381372 | orchestrator | skipping: [testbed-node-4] => (item={'id': '02338e74cd1efd4f0fd23e640c41ee33ca641b0288acf5a9ecad2c02f2f7cb70', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-23 08:09:40.381385 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3877ae9f55bcecdcf1bba9f2b03f77267e0eb57ddc3d953dc814cd157b1215c7', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-23 08:09:40.381392 | orchestrator | skipping: [testbed-node-4] => (item={'id': '46897eb3c0e8c76c229aba4c98a67a57a9bccdf111e3ee6824e0173a820cef22', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-09-23 08:09:40.381414 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2ee91a31cc6d5a9bfe3cb70efdbfcd8ce2c7d0fb35b38ed348c9c6bc41ab9e94', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-23 08:09:40.381420 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'7c8c996faf39b397ceb3ecf3e498e94d35f4bc805659953976b850eea4c4148b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-23 08:09:40.381426 | orchestrator | skipping: [testbed-node-4] => (item={'id': '56eb1216612b48905603db36753080e8cede797736ca8324d13edeff5de60f08', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-23 08:09:40.381434 | orchestrator | skipping: [testbed-node-4] => (item={'id': '84bca18a51aa13424aed921176e3f5a0d54c2adcaefda6a506fa4bd7f2200095', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-23 08:09:40.381440 | orchestrator | skipping: [testbed-node-4] => (item={'id': '84b01ded189454d03659c493cd9382b13f56e0c9e682d0ec4781d7969750968b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-23 08:09:40.381446 | orchestrator | skipping: [testbed-node-4] => (item={'id': '05a67e0bb0e0e8e35c07b5f906cbf5b0e91dbb8e5df08ab4bd0ab81544dbe98a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-23 08:09:40.381453 | orchestrator | ok: [testbed-node-4] => (item={'id': '72e1400f991f98d884fddb92aef3ada43c38120015e3d9e93db2fa5ff058ebd3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-23 08:09:40.381475 | orchestrator | ok: [testbed-node-4] => (item={'id': '404db9343f0f331dca10f9a2ed4d06e753a8b253fcd103a19c940dd126ad5bf6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-23 
08:09:40.381482 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6230c51f75fb92d714d1441f3244bea1ccc24f0fc037aaf987c2a76de5d43703', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-23 08:09:40.381488 | orchestrator | skipping: [testbed-node-4] => (item={'id': '93e42a89df88f1ace289b0f93d1aa04343ed0834937b94af4c05f5b64fb6a60c', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-23 08:09:40.381494 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6651edeb364403ed3130f77b114bf67e120369d3b6f3ff5d8ec8bda69a7786cc', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-23 08:09:40.381514 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a356bf48e564bbdc2dd8aa0fc4370a373e942d35e084c0bdbfd54aa379590773', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-23 08:09:40.381520 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eff47942f79f23027f6411d341f40d394e2194bf5d4c94e8c116a1606807f98c', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-23 08:09:40.381527 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ec546c416b20201fe3797e430ca31f45cee0c46f3aad4941b820fb69aa05f660', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-09-23 08:09:40.381533 | orchestrator | skipping: [testbed-node-5] => (item={'id': '39adfffe59e71af001551eb1765289aaf08067924323b83ee22a8c51ef72016b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 
'status': 'Up 6 minutes (healthy)'})  2025-09-23 08:09:40.381540 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'db9d1e4db37c1198e12e859a21ee96b792d7acd2f12b2b496724016da049485a', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-23 08:09:40.381549 | orchestrator | skipping: [testbed-node-5] => (item={'id': '83c751d290f7dd34f9c340d8ff4482d11e4b714026ed662bdd2ff91d622fc5f8', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-09-23 08:09:40.381556 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6a1d9fe7047594b3a4af0c75d1ff0afb2b6a8c5c505cf88b91a7439e7d441b58', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-23 08:09:40.381562 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ceb9c9e8e1825dd2fb5cbf66ba039d93fde73e1c02fad053408372ca28f1f1eb', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-23 08:09:40.381568 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'be235a9378cca0dfbb0a079bf8191689e627106561cb6f4c1ca0751c6b6ea872', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-09-23 08:09:40.381574 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e7c2bc19be90aba43f14100621358335465136575ffdc7de2df7372712d701af', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-23 08:09:40.381585 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'b91f54f2337b8df44ea9b8e78ef224a2c937fe9b58526d3f73c5e4a3025bcabc', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-23 08:09:40.381591 | orchestrator | skipping: [testbed-node-5] => (item={'id': '51f7df674a1a024f760ef9473eed03b5bd60b7775c4ec94d0b7113ab5b677194', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-23 08:09:40.381597 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c652361944f44acd198cf23ad3964f877cd29c958e097629f6966654dd16802a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-23 08:09:40.381603 | orchestrator | skipping: [testbed-node-5] => (item={'id': '576a58a808744156d6d313cce5ce8691b90e889ceff1b883d42683d775119f2a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-23 08:09:40.381609 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9fe9adee5792a88a107e14bb80471de20c697d2788d97234ef40d798f814cc32', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-23 08:09:40.381620 | orchestrator | ok: [testbed-node-5] => (item={'id': '5e459d31aa0ebdd39b9e76129960dd6dd3b3875406745233f3e8c37145e8860e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-23 08:09:48.759115 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b372b8e0def12f4efef0209298aba2063aeac0e3528008dbc18b12ce998da472', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-23 
08:09:48.759246 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7fb0bb3e931e301ff7fc2df0755d763dfa69a9c602ae06da7de72dace071f2de', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2025-09-23 08:09:48.759271 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2b55241bfb1f3d5d7dc5db28076140a4b77c9457ae36a1f1161d5e4024f9a296', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-23 08:09:48.759286 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'afb9a5b97aee7c35c5a5e6aa536ede7ce1231cb6bfab105813bf9fecb8ad06b6', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-23 08:09:48.759299 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c0b3eba38fc6fb2fdf17b07a36470f96c2ec85a036b223dc7752077996f566b8', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-23 08:09:48.759389 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11526cb0e611af10c215c57972c9a80c4727d7887bca29766d27490669ac7a98', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-23 08:09:48.759411 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e158f43981e1d659bb8611fede45fcbd8626a22d6edcd808eecc76f99c4e8792', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-09-23 08:09:48.759431 | orchestrator | 2025-09-23 08:09:48.759452 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-09-23 08:09:48.759472 | orchestrator | Tuesday 23 September 2025 08:09:40 +0000 
(0:00:00.528) 0:00:05.412 ***** 2025-09-23 08:09:48.759492 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:09:48.759545 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:09:48.759564 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:09:48.759583 | orchestrator | 2025-09-23 08:09:48.759602 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-09-23 08:09:48.759625 | orchestrator | Tuesday 23 September 2025 08:09:40 +0000 (0:00:00.356) 0:00:05.769 ***** 2025-09-23 08:09:48.759653 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.759671 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:09:48.759690 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:09:48.759708 | orchestrator | 2025-09-23 08:09:48.759734 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-09-23 08:09:48.759759 | orchestrator | Tuesday 23 September 2025 08:09:40 +0000 (0:00:00.282) 0:00:06.052 ***** 2025-09-23 08:09:48.759779 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:09:48.759799 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:09:48.759818 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:09:48.759837 | orchestrator | 2025-09-23 08:09:48.759860 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-23 08:09:48.759885 | orchestrator | Tuesday 23 September 2025 08:09:41 +0000 (0:00:00.532) 0:00:06.584 ***** 2025-09-23 08:09:48.759914 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:09:48.759935 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:09:48.759954 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:09:48.759973 | orchestrator | 2025-09-23 08:09:48.759993 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-09-23 08:09:48.760013 | orchestrator | Tuesday 23 September 2025 08:09:41 +0000 (0:00:00.287) 0:00:06.871 ***** 2025-09-23 
08:09:48.760029 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-09-23 08:09:48.760043 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-09-23 08:09:48.760055 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.760066 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-09-23 08:09:48.760077 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-09-23 08:09:48.760088 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:09:48.760099 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-09-23 08:09:48.760110 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-09-23 08:09:48.760121 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:09:48.760132 | orchestrator | 2025-09-23 08:09:48.760143 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-09-23 08:09:48.760154 | orchestrator | Tuesday 23 September 2025 08:09:42 +0000 (0:00:00.331) 0:00:07.203 ***** 2025-09-23 08:09:48.760165 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:09:48.760176 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:09:48.760187 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:09:48.760198 | orchestrator | 2025-09-23 08:09:48.760230 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-23 08:09:48.760241 | orchestrator | Tuesday 23 September 2025 08:09:42 +0000 (0:00:00.303) 0:00:07.507 ***** 2025-09-23 08:09:48.760252 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.760263 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:09:48.760274 | orchestrator | 
skipping: [testbed-node-5] 2025-09-23 08:09:48.760285 | orchestrator | 2025-09-23 08:09:48.760296 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-23 08:09:48.760337 | orchestrator | Tuesday 23 September 2025 08:09:42 +0000 (0:00:00.501) 0:00:08.008 ***** 2025-09-23 08:09:48.760351 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.760362 | orchestrator | skipping: [testbed-node-4] 2025-09-23 08:09:48.760391 | orchestrator | skipping: [testbed-node-5] 2025-09-23 08:09:48.760403 | orchestrator | 2025-09-23 08:09:48.760425 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-09-23 08:09:48.760436 | orchestrator | Tuesday 23 September 2025 08:09:43 +0000 (0:00:00.303) 0:00:08.312 ***** 2025-09-23 08:09:48.760447 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:09:48.760459 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:09:48.760469 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:09:48.760480 | orchestrator | 2025-09-23 08:09:48.760491 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-23 08:09:48.760502 | orchestrator | Tuesday 23 September 2025 08:09:43 +0000 (0:00:00.322) 0:00:08.635 ***** 2025-09-23 08:09:48.760513 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.760524 | orchestrator | 2025-09-23 08:09:48.760535 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-23 08:09:48.760545 | orchestrator | Tuesday 23 September 2025 08:09:43 +0000 (0:00:00.301) 0:00:08.936 ***** 2025-09-23 08:09:48.760556 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.760567 | orchestrator | 2025-09-23 08:09:48.760584 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-23 08:09:48.760605 | orchestrator | Tuesday 23 September 2025 08:09:44 +0000 (0:00:00.248) 
0:00:09.184 ***** 2025-09-23 08:09:48.760624 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.760668 | orchestrator | 2025-09-23 08:09:48.760733 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-23 08:09:48.760746 | orchestrator | Tuesday 23 September 2025 08:09:44 +0000 (0:00:00.257) 0:00:09.442 ***** 2025-09-23 08:09:48.760757 | orchestrator | 2025-09-23 08:09:48.760768 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-23 08:09:48.760779 | orchestrator | Tuesday 23 September 2025 08:09:44 +0000 (0:00:00.075) 0:00:09.517 ***** 2025-09-23 08:09:48.760789 | orchestrator | 2025-09-23 08:09:48.760800 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-23 08:09:48.760811 | orchestrator | Tuesday 23 September 2025 08:09:44 +0000 (0:00:00.072) 0:00:09.590 ***** 2025-09-23 08:09:48.760822 | orchestrator | 2025-09-23 08:09:48.760832 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-23 08:09:48.760843 | orchestrator | Tuesday 23 September 2025 08:09:44 +0000 (0:00:00.350) 0:00:09.940 ***** 2025-09-23 08:09:48.760854 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.760865 | orchestrator | 2025-09-23 08:09:48.760876 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-09-23 08:09:48.760887 | orchestrator | Tuesday 23 September 2025 08:09:45 +0000 (0:00:00.298) 0:00:10.238 ***** 2025-09-23 08:09:48.760898 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.760908 | orchestrator | 2025-09-23 08:09:48.760920 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-23 08:09:48.760934 | orchestrator | Tuesday 23 September 2025 08:09:45 +0000 (0:00:00.275) 0:00:10.513 ***** 2025-09-23 08:09:48.760953 | orchestrator | ok: 
[testbed-node-3] 2025-09-23 08:09:48.760971 | orchestrator | ok: [testbed-node-4] 2025-09-23 08:09:48.760989 | orchestrator | ok: [testbed-node-5] 2025-09-23 08:09:48.761008 | orchestrator | 2025-09-23 08:09:48.761026 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-09-23 08:09:48.761044 | orchestrator | Tuesday 23 September 2025 08:09:45 +0000 (0:00:00.333) 0:00:10.847 ***** 2025-09-23 08:09:48.761064 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:09:48.761084 | orchestrator | 2025-09-23 08:09:48.761104 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-09-23 08:09:48.761124 | orchestrator | Tuesday 23 September 2025 08:09:46 +0000 (0:00:00.222) 0:00:11.070 ***** 2025-09-23 08:09:48.761144 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-23 08:09:48.761164 | orchestrator | 2025-09-23 08:09:48.761182 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-09-23 08:09:48.761202 | orchestrator | Tuesday 23 September 2025 08:09:47 +0000 (0:00:01.543) 0:00:12.613 ***** 2025-09-23 08:09:48.761223 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:09:48.761254 | orchestrator | 2025-09-23 08:09:48.761266 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-09-23 08:09:48.761277 | orchestrator | Tuesday 23 September 2025 08:09:47 +0000 (0:00:00.131) 0:00:12.745 ***** 2025-09-23 08:09:48.761289 | orchestrator | ok: [testbed-node-3] 2025-09-23 08:09:48.761300 | orchestrator | 2025-09-23 08:09:48.761335 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-09-23 08:09:48.761347 | orchestrator | Tuesday 23 September 2025 08:09:47 +0000 (0:00:00.289) 0:00:13.035 ***** 2025-09-23 08:09:48.761358 | orchestrator | skipping: [testbed-node-3] 2025-09-23 08:09:48.761368 | orchestrator | 
2025-09-23 08:09:48.761380 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-09-23 08:09:48.761390 | orchestrator | Tuesday 23 September 2025 08:09:48 +0000 (0:00:00.107) 0:00:13.142 *****
2025-09-23 08:09:48.761401 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:09:48.761412 | orchestrator |
2025-09-23 08:09:48.761423 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-23 08:09:48.761434 | orchestrator | Tuesday 23 September 2025 08:09:48 +0000 (0:00:00.119) 0:00:13.262 *****
2025-09-23 08:09:48.761445 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:09:48.761456 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:09:48.761467 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:09:48.761478 | orchestrator |
2025-09-23 08:09:48.761489 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-09-23 08:09:48.761511 | orchestrator | Tuesday 23 September 2025 08:09:48 +0000 (0:00:00.539) 0:00:13.802 *****
2025-09-23 08:10:01.436144 | orchestrator | changed: [testbed-node-3]
2025-09-23 08:10:01.436256 | orchestrator | changed: [testbed-node-4]
2025-09-23 08:10:01.436272 | orchestrator | changed: [testbed-node-5]
2025-09-23 08:10:01.436289 | orchestrator |
2025-09-23 08:10:01.436310 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-09-23 08:10:01.436329 | orchestrator | Tuesday 23 September 2025 08:09:51 +0000 (0:00:02.378) 0:00:16.180 *****
2025-09-23 08:10:01.436412 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:10:01.436427 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:10:01.436439 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:10:01.436450 | orchestrator |
2025-09-23 08:10:01.436462 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-09-23 08:10:01.436473 | orchestrator | Tuesday 23 September 2025 08:09:51 +0000 (0:00:00.352) 0:00:16.533 *****
2025-09-23 08:10:01.436485 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:10:01.436496 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:10:01.436507 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:10:01.436518 | orchestrator |
2025-09-23 08:10:01.436530 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-09-23 08:10:01.436541 | orchestrator | Tuesday 23 September 2025 08:09:51 +0000 (0:00:00.470) 0:00:17.003 *****
2025-09-23 08:10:01.436552 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:10:01.436563 | orchestrator | skipping: [testbed-node-4]
2025-09-23 08:10:01.436574 | orchestrator | skipping: [testbed-node-5]
2025-09-23 08:10:01.436585 | orchestrator |
2025-09-23 08:10:01.436596 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-09-23 08:10:01.436607 | orchestrator | Tuesday 23 September 2025 08:09:52 +0000 (0:00:00.499) 0:00:17.503 *****
2025-09-23 08:10:01.436618 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:10:01.436629 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:10:01.436640 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:10:01.436668 | orchestrator |
2025-09-23 08:10:01.436680 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-09-23 08:10:01.436694 | orchestrator | Tuesday 23 September 2025 08:09:52 +0000 (0:00:00.328) 0:00:17.832 *****
2025-09-23 08:10:01.436707 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:10:01.436720 | orchestrator | skipping: [testbed-node-4]
2025-09-23 08:10:01.436733 | orchestrator | skipping: [testbed-node-5]
2025-09-23 08:10:01.436746 | orchestrator |
2025-09-23 08:10:01.436784 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-09-23 08:10:01.436798 | orchestrator | Tuesday 23 September 2025 08:09:53 +0000 (0:00:00.306) 0:00:18.138 *****
2025-09-23 08:10:01.436811 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:10:01.436827 | orchestrator | skipping: [testbed-node-4]
2025-09-23 08:10:01.436846 | orchestrator | skipping: [testbed-node-5]
2025-09-23 08:10:01.436869 | orchestrator |
2025-09-23 08:10:01.436894 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-23 08:10:01.436913 | orchestrator | Tuesday 23 September 2025 08:09:53 +0000 (0:00:00.303) 0:00:18.441 *****
2025-09-23 08:10:01.436931 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:10:01.436949 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:10:01.436967 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:10:01.436986 | orchestrator |
2025-09-23 08:10:01.437007 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-09-23 08:10:01.437025 | orchestrator | Tuesday 23 September 2025 08:09:54 +0000 (0:00:00.744) 0:00:19.186 *****
2025-09-23 08:10:01.437045 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:10:01.437066 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:10:01.437085 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:10:01.437104 | orchestrator |
2025-09-23 08:10:01.437116 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-09-23 08:10:01.437127 | orchestrator | Tuesday 23 September 2025 08:09:54 +0000 (0:00:00.496) 0:00:19.683 *****
2025-09-23 08:10:01.437138 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:10:01.437149 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:10:01.437160 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:10:01.437170 | orchestrator |
2025-09-23 08:10:01.437182 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-09-23 08:10:01.437193 | orchestrator | Tuesday 23 September 2025 08:09:54 +0000 (0:00:00.307) 0:00:19.990 *****
2025-09-23 08:10:01.437204 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:10:01.437215 | orchestrator | skipping: [testbed-node-4]
2025-09-23 08:10:01.437226 | orchestrator | skipping: [testbed-node-5]
2025-09-23 08:10:01.437237 | orchestrator |
2025-09-23 08:10:01.437248 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-09-23 08:10:01.437259 | orchestrator | Tuesday 23 September 2025 08:09:55 +0000 (0:00:00.303) 0:00:20.293 *****
2025-09-23 08:10:01.437270 | orchestrator | ok: [testbed-node-3]
2025-09-23 08:10:01.437281 | orchestrator | ok: [testbed-node-4]
2025-09-23 08:10:01.437292 | orchestrator | ok: [testbed-node-5]
2025-09-23 08:10:01.437303 | orchestrator |
2025-09-23 08:10:01.437314 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-09-23 08:10:01.437325 | orchestrator | Tuesday 23 September 2025 08:09:55 +0000 (0:00:00.572) 0:00:20.866 *****
2025-09-23 08:10:01.437336 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 08:10:01.437376 | orchestrator |
2025-09-23 08:10:01.437387 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-09-23 08:10:01.437398 | orchestrator | Tuesday 23 September 2025 08:09:56 +0000 (0:00:00.273) 0:00:21.140 *****
2025-09-23 08:10:01.437409 | orchestrator | skipping: [testbed-node-3]
2025-09-23 08:10:01.437420 | orchestrator |
2025-09-23 08:10:01.437431 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-23 08:10:01.437442 | orchestrator | Tuesday 23 September 2025 08:09:56 +0000 (0:00:00.280) 0:00:21.421 *****
2025-09-23 08:10:01.437453 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 08:10:01.437464 | orchestrator |
2025-09-23 08:10:01.437475 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-23 08:10:01.437486 | orchestrator | Tuesday 23 September 2025 08:09:57 +0000 (0:00:01.604) 0:00:23.025 *****
2025-09-23 08:10:01.437497 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 08:10:01.437508 | orchestrator |
2025-09-23 08:10:01.437520 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-23 08:10:01.437555 | orchestrator | Tuesday 23 September 2025 08:09:58 +0000 (0:00:00.274) 0:00:23.300 *****
2025-09-23 08:10:01.437599 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 08:10:01.437618 | orchestrator |
2025-09-23 08:10:01.437639 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:10:01.437657 | orchestrator | Tuesday 23 September 2025 08:09:58 +0000 (0:00:00.273) 0:00:23.573 *****
2025-09-23 08:10:01.437675 | orchestrator |
2025-09-23 08:10:01.437686 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:10:01.437697 | orchestrator | Tuesday 23 September 2025 08:09:58 +0000 (0:00:00.068) 0:00:23.642 *****
2025-09-23 08:10:01.437709 | orchestrator |
2025-09-23 08:10:01.437720 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-23 08:10:01.437731 | orchestrator | Tuesday 23 September 2025 08:09:58 +0000 (0:00:00.067) 0:00:23.709 *****
2025-09-23 08:10:01.437742 | orchestrator |
2025-09-23 08:10:01.437753 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-23 08:10:01.437764 | orchestrator | Tuesday 23 September 2025 08:09:58 +0000 (0:00:00.070) 0:00:23.779 *****
2025-09-23 08:10:01.437775 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-23 08:10:01.437786 | orchestrator |
2025-09-23 08:10:01.437797 | orchestrator | TASK [Print report file information] *******************************************
2025-09-23 08:10:01.437808 | orchestrator | Tuesday 23 September 2025 08:10:00 +0000 (0:00:01.576) 0:00:25.356 *****
2025-09-23 08:10:01.437826 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-09-23 08:10:01.437845 | orchestrator |     "msg": [
2025-09-23 08:10:01.437864 | orchestrator |         "Validator run completed.",
2025-09-23 08:10:01.437882 | orchestrator |         "You can find the report file here:",
2025-09-23 08:10:01.437912 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2025-09-23T08:09:36+00:00-report.json",
2025-09-23 08:10:01.437932 | orchestrator |         "on the following host:",
2025-09-23 08:10:01.437952 | orchestrator |         "testbed-manager"
2025-09-23 08:10:01.437971 | orchestrator |     ]
2025-09-23 08:10:01.437986 | orchestrator | }
2025-09-23 08:10:01.437998 | orchestrator |
2025-09-23 08:10:01.438009 | orchestrator | PLAY RECAP *********************************************************************
2025-09-23 08:10:01.438087 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-09-23 08:10:01.438103 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-23 08:10:01.438114 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-23 08:10:01.438125 | orchestrator |
2025-09-23 08:10:01.438136 | orchestrator |
2025-09-23 08:10:01.438147 | orchestrator | TASKS RECAP ********************************************************************
2025-09-23 08:10:01.438158 | orchestrator | Tuesday 23 September 2025 08:10:01 +0000 (0:00:00.827) 0:00:26.184 *****
2025-09-23 08:10:01.438169 | orchestrator | ===============================================================================
2025-09-23 08:10:01.438180 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.38s
2025-09-23 08:10:01.438191 | orchestrator | Aggregate test results step one ----------------------------------------- 1.60s
2025-09-23 08:10:01.438202 | orchestrator | Write report file ------------------------------------------------------- 1.58s
2025-09-23 08:10:01.438213 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.54s
2025-09-23 08:10:01.438224 | orchestrator | Create report output directory ------------------------------------------ 1.02s
2025-09-23 08:10:01.438235 | orchestrator | Print report file information ------------------------------------------- 0.83s
2025-09-23 08:10:01.438246 | orchestrator | Prepare test data ------------------------------------------------------- 0.75s
2025-09-23 08:10:01.438267 | orchestrator | Get timestamp for report file ------------------------------------------- 0.74s
2025-09-23 08:10:01.438278 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.64s
2025-09-23 08:10:01.438289 | orchestrator | Prepare test data ------------------------------------------------------- 0.64s
2025-09-23 08:10:01.438300 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.57s
2025-09-23 08:10:01.438310 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s
2025-09-23 08:10:01.438321 | orchestrator | Set test result to passed if count matches ------------------------------ 0.53s
2025-09-23 08:10:01.438332 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.53s
2025-09-23 08:10:01.438362 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.50s
2025-09-23 08:10:01.438374 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.50s
2025-09-23 08:10:01.438384 | orchestrator | Flush handlers ---------------------------------------------------------- 0.50s
2025-09-23 08:10:01.438395 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.50s
2025-09-23 08:10:01.438406 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s
2025-09-23 08:10:01.438417 | orchestrator | Get count of ceph-osd containers on host -------------------------------- 0.36s
2025-09-23 08:10:01.752277 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-09-23 08:10:01.761956 | orchestrator | + set -e
2025-09-23 08:10:01.762068 | orchestrator | + source /opt/manager-vars.sh
2025-09-23 08:10:01.762083 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-23 08:10:01.762094 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-23 08:10:01.762105 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-23 08:10:01.762116 | orchestrator | ++ CEPH_VERSION=reef
2025-09-23 08:10:01.762128 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-23 08:10:01.762140 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-23 08:10:01.762151 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-23 08:10:01.762162 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-23 08:10:01.762173 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-23 08:10:01.762184 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-23 08:10:01.762195 | orchestrator | ++ export ARA=false
2025-09-23 08:10:01.762206 | orchestrator | ++ ARA=false
2025-09-23 08:10:01.762216 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-23 08:10:01.762227 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-23 08:10:01.763087 | orchestrator | ++ export TEMPEST=false
2025-09-23 08:10:01.763110 | orchestrator | ++ TEMPEST=false
2025-09-23 08:10:01.763121 | orchestrator | ++ export IS_ZUUL=true
2025-09-23 08:10:01.763132 | orchestrator | ++ IS_ZUUL=true
2025-09-23 08:10:01.763143 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228
2025-09-23 08:10:01.763154 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228
2025-09-23 08:10:01.763165 | orchestrator | ++ export EXTERNAL_API=false
2025-09-23 08:10:01.763176 | orchestrator | ++ EXTERNAL_API=false
2025-09-23 08:10:01.763186 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-23 08:10:01.763197 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-23 08:10:01.763208 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-23 08:10:01.763218 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-23 08:10:01.763229 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-23 08:10:01.763240 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-23 08:10:01.763251 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-23 08:10:01.763261 | orchestrator | + source /etc/os-release
2025-09-23 08:10:01.763272 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2025-09-23 08:10:01.763283 | orchestrator | ++ NAME=Ubuntu
2025-09-23 08:10:01.763294 | orchestrator | ++ VERSION_ID=24.04
2025-09-23 08:10:01.763305 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2025-09-23 08:10:01.763316 | orchestrator | ++ VERSION_CODENAME=noble
2025-09-23 08:10:01.763327 | orchestrator | ++ ID=ubuntu
2025-09-23 08:10:01.763338 | orchestrator | ++ ID_LIKE=debian
2025-09-23 08:10:01.763382 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-09-23 08:10:01.763393 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-09-23 08:10:01.763404 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-09-23 08:10:01.763416 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-09-23 08:10:01.763428 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-09-23 08:10:01.763439 | orchestrator | ++ LOGO=ubuntu-logo
2025-09-23 08:10:01.763475 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-09-23 08:10:01.763487 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-09-23 08:10:01.763500 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-09-23 08:10:01.804528 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-09-23 08:10:27.054543 | orchestrator |
2025-09-23 08:10:27.054655 | orchestrator | # Status of Elasticsearch
2025-09-23 08:10:27.054671 | orchestrator |
2025-09-23 08:10:27.054684 | orchestrator | + pushd /opt/configuration/contrib
2025-09-23 08:10:27.054696 | orchestrator | + echo
2025-09-23 08:10:27.054707 | orchestrator | + echo '# Status of Elasticsearch'
2025-09-23 08:10:27.054718 | orchestrator | + echo
2025-09-23 08:10:27.054729 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-09-23 08:10:27.258879 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-09-23 08:10:27.258974 | orchestrator |
2025-09-23 08:10:27.258990 | orchestrator | # Status of MariaDB
2025-09-23 08:10:27.259002 | orchestrator |
2025-09-23 08:10:27.259013 | orchestrator | + echo
2025-09-23 08:10:27.259024 | orchestrator | + echo '# Status of MariaDB'
2025-09-23 08:10:27.259035 | orchestrator | + echo
2025-09-23 08:10:27.259046 | orchestrator | + MARIADB_USER=root_shard_0
2025-09-23 08:10:27.259058 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-09-23 08:10:27.341386 | orchestrator | Reading package lists...
2025-09-23 08:10:27.830948 | orchestrator | Building dependency tree...
2025-09-23 08:10:27.831904 | orchestrator | Reading state information...
2025-09-23 08:10:28.319156 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-09-23 08:10:28.319258 | orchestrator | bc set to manually installed.
2025-09-23 08:10:28.319274 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2025-09-23 08:10:29.020867 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-09-23 08:10:29.021546 | orchestrator |
2025-09-23 08:10:29.021601 | orchestrator | # Status of Prometheus
2025-09-23 08:10:29.021609 | orchestrator |
2025-09-23 08:10:29.021614 | orchestrator | + echo
2025-09-23 08:10:29.021620 | orchestrator | + echo '# Status of Prometheus'
2025-09-23 08:10:29.021625 | orchestrator | + echo
2025-09-23 08:10:29.021630 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-09-23 08:10:29.097545 | orchestrator | Unauthorized
2025-09-23 08:10:29.101975 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-09-23 08:10:29.154965 | orchestrator | Unauthorized
2025-09-23 08:10:29.159794 | orchestrator |
2025-09-23 08:10:29.159847 | orchestrator | # Status of RabbitMQ
2025-09-23 08:10:29.159860 | orchestrator |
2025-09-23 08:10:29.159873 | orchestrator | + echo
2025-09-23 08:10:29.159884 | orchestrator | + echo '# Status of RabbitMQ'
2025-09-23 08:10:29.159896 | orchestrator | + echo
2025-09-23 08:10:29.159908 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-09-23 08:10:29.667384 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-09-23 08:10:29.683600 | orchestrator |
2025-09-23 08:10:29.683695 | orchestrator | # Status of Redis
2025-09-23 08:10:29.683710 | orchestrator |
2025-09-23 08:10:29.683723 | orchestrator | + echo
2025-09-23 08:10:29.683735 | orchestrator | + echo '# Status of Redis'
2025-09-23 08:10:29.683748 | orchestrator | + echo
2025-09-23 08:10:29.683760 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-09-23 08:10:29.690241 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002214s;;;0.000000;10.000000
2025-09-23 08:10:29.690604 | orchestrator | + popd
2025-09-23 08:10:29.690724 | orchestrator |
2025-09-23 08:10:29.690745 | orchestrator | + echo
2025-09-23 08:10:29.690753 | orchestrator | + echo '# Create backup of MariaDB database'
2025-09-23 08:10:29.690762 | orchestrator | # Create backup of MariaDB database
2025-09-23 08:10:29.690768 | orchestrator |
2025-09-23 08:10:29.690779 | orchestrator | + echo
2025-09-23 08:10:29.690815 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-09-23 08:10:31.844589 | orchestrator | 2025-09-23 08:10:31 | INFO  | Task 583525d3-3a0b-41af-905d-4eb511add032 (mariadb_backup) was prepared for execution.
2025-09-23 08:10:31.844699 | orchestrator | 2025-09-23 08:10:31 | INFO  | It takes a moment until task 583525d3-3a0b-41af-905d-4eb511add032 (mariadb_backup) has been started and output is visible here.
2025-09-23 08:12:17.393228 | orchestrator | 2025-09-23 08:12:17.393320 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-23 08:12:17.393330 | orchestrator | 2025-09-23 08:12:17.393337 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-23 08:12:17.393344 | orchestrator | Tuesday 23 September 2025 08:10:36 +0000 (0:00:00.198) 0:00:00.198 ***** 2025-09-23 08:12:17.393350 | orchestrator | ok: [testbed-node-0] 2025-09-23 08:12:17.393358 | orchestrator | ok: [testbed-node-1] 2025-09-23 08:12:17.393363 | orchestrator | ok: [testbed-node-2] 2025-09-23 08:12:17.393369 | orchestrator | 2025-09-23 08:12:17.393375 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-23 08:12:17.393381 | orchestrator | Tuesday 23 September 2025 08:10:36 +0000 (0:00:00.336) 0:00:00.535 ***** 2025-09-23 08:12:17.393387 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-23 08:12:17.393394 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-23 08:12:17.393400 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-23 08:12:17.393406 | orchestrator | 2025-09-23 08:12:17.393412 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-23 08:12:17.393418 | orchestrator | 2025-09-23 08:12:17.393423 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-23 08:12:17.393429 | orchestrator | Tuesday 23 September 2025 08:10:37 +0000 (0:00:00.655) 0:00:01.190 ***** 2025-09-23 08:12:17.393435 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-23 08:12:17.393441 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-23 08:12:17.393447 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-23 08:12:17.393453 | orchestrator | 
2025-09-23 08:12:17.393459 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-23 08:12:17.393464 | orchestrator | Tuesday 23 September 2025 08:10:37 +0000 (0:00:00.415) 0:00:01.605 ***** 2025-09-23 08:12:17.393471 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-23 08:12:17.393477 | orchestrator | 2025-09-23 08:12:17.393483 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-23 08:12:17.393501 | orchestrator | Tuesday 23 September 2025 08:10:38 +0000 (0:00:00.600) 0:00:02.206 ***** 2025-09-23 08:12:17.393508 | orchestrator | ok: [testbed-node-0] 2025-09-23 08:12:17.393514 | orchestrator | ok: [testbed-node-1] 2025-09-23 08:12:17.393519 | orchestrator | ok: [testbed-node-2] 2025-09-23 08:12:17.393525 | orchestrator | 2025-09-23 08:12:17.393531 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-09-23 08:12:17.393537 | orchestrator | Tuesday 23 September 2025 08:10:41 +0000 (0:00:03.248) 0:00:05.454 ***** 2025-09-23 08:12:17.393542 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-23 08:12:17.393548 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-23 08:12:17.393555 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-23 08:12:17.393561 | orchestrator | mariadb_bootstrap_restart 2025-09-23 08:12:17.393567 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:12:17.393572 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:12:17.393578 | orchestrator | changed: [testbed-node-0] 2025-09-23 08:12:17.393584 | orchestrator | 2025-09-23 08:12:17.393590 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-23 08:12:17.393596 | orchestrator | 
skipping: no hosts matched 2025-09-23 08:12:17.393620 | orchestrator | 2025-09-23 08:12:17.393627 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-23 08:12:17.393633 | orchestrator | skipping: no hosts matched 2025-09-23 08:12:17.393638 | orchestrator | 2025-09-23 08:12:17.393644 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-23 08:12:17.393650 | orchestrator | skipping: no hosts matched 2025-09-23 08:12:17.393656 | orchestrator | 2025-09-23 08:12:17.393661 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-23 08:12:17.393667 | orchestrator | 2025-09-23 08:12:17.393673 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-23 08:12:17.393708 | orchestrator | Tuesday 23 September 2025 08:12:16 +0000 (0:01:35.073) 0:01:40.528 ***** 2025-09-23 08:12:17.393714 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:12:17.393720 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:12:17.393726 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:12:17.393732 | orchestrator | 2025-09-23 08:12:17.393738 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-23 08:12:17.393743 | orchestrator | Tuesday 23 September 2025 08:12:16 +0000 (0:00:00.295) 0:01:40.823 ***** 2025-09-23 08:12:17.393749 | orchestrator | skipping: [testbed-node-0] 2025-09-23 08:12:17.393755 | orchestrator | skipping: [testbed-node-1] 2025-09-23 08:12:17.393761 | orchestrator | skipping: [testbed-node-2] 2025-09-23 08:12:17.393767 | orchestrator | 2025-09-23 08:12:17.393773 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-23 08:12:17.393780 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-23 
08:12:17.393788 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-23 08:12:17.393796 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-23 08:12:17.393802 | orchestrator | 2025-09-23 08:12:17.393809 | orchestrator | 2025-09-23 08:12:17.393816 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-23 08:12:17.393823 | orchestrator | Tuesday 23 September 2025 08:12:17 +0000 (0:00:00.220) 0:01:41.044 ***** 2025-09-23 08:12:17.393829 | orchestrator | =============================================================================== 2025-09-23 08:12:17.393836 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 95.07s 2025-09-23 08:12:17.393856 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.25s 2025-09-23 08:12:17.393863 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2025-09-23 08:12:17.393870 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.60s 2025-09-23 08:12:17.393876 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s 2025-09-23 08:12:17.393883 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-09-23 08:12:17.393890 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-09-23 08:12:17.393896 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2025-09-23 08:12:17.723905 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-23 08:12:17.730242 | orchestrator | + set -e 2025-09-23 08:12:17.730308 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-23 08:12:17.730318 | orchestrator | ++ export 
INTERACTIVE=false 2025-09-23 08:12:17.730327 | orchestrator | ++ INTERACTIVE=false 2025-09-23 08:12:17.730335 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-23 08:12:17.730342 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-23 08:12:17.730350 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-23 08:12:17.731732 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-23 08:12:17.738464 | orchestrator | 2025-09-23 08:12:17.738537 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-23 08:12:17.738547 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-23 08:12:17.738555 | orchestrator | + export OS_CLOUD=admin 2025-09-23 08:12:17.738563 | orchestrator | + OS_CLOUD=admin 2025-09-23 08:12:17.738570 | orchestrator | + echo 2025-09-23 08:12:17.738952 | orchestrator | # OpenStack endpoints 2025-09-23 08:12:17.738967 | orchestrator | 2025-09-23 08:12:17.738975 | orchestrator | + echo '# OpenStack endpoints' 2025-09-23 08:12:17.738982 | orchestrator | + echo 2025-09-23 08:12:17.738990 | orchestrator | + openstack endpoint list 2025-09-23 08:12:21.785052 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-23 08:12:21.785141 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-23 08:12:21.785172 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-23 08:12:21.785184 | orchestrator | | 0dd852f0f4ea4b77acd1a3c6cf55e20e | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-23 08:12:21.785195 | orchestrator | | 21eaa9dfb77b4a6d9eec54cc5cf8810c | RegionOne | designate | dns | True | 
internal | https://api-int.testbed.osism.xyz:9001 |
| 27fbcafb1f124660911fdf467d3a31e9 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
| 42e5bd1178a042e78110006b5edf6610 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
| 5475040c73f54e5384dd8a0720539f9d | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
| 56467630dc9c4baabc33abb9649caa40 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
| 722f2b6465fa4001ac53c9495d0f8e89 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
| 89bc35112afe45beade4570021fefab3 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
| 8bd7c36bc8bb4d5082388b5ea01509ee | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
| 8cd546e30d03445783b27c648954104a | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
| 927446d4ea7c49888f8b37210142944a | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
| 9379ea3a95d94ac9a517423340605b9e | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
| aa9ac27951b14244984335219569b985 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
| b14210b9e1c640d0a1e751baad13eef7 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
| b74c6a5367264e07a34d1af00392f1c3 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
| cafe927e02694fa9a91178533d306edf | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
| cf0c9f5f698a4b078e76b7566e22403b | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
| cfcdb4cf9aa2405cac6be54d97eb53ff | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
| d3143a6044774fca92ddcfb852d6d302 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
| e20f96dede034f0abc0abccc1f7dfafd | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
| eac2c93f330741ef89208df827613326 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
| fddcdf9dff814e17b4c6bed530367a79 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |

2025-09-23 08:12:22.058300 | orchestrator | # Cinder
2025-09-23 08:12:22.058361 | orchestrator | + openstack volume service list
| Binary | Host | Zone | Status | State | Updated At |
| cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-23T08:12:18.000000 |
| cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-23T08:12:18.000000 |
| cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-23T08:12:18.000000 |
| cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-23T08:12:18.000000 |
| cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-23T08:12:20.000000 |
| cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-23T08:12:23.000000 |
| cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-23T08:12:21.000000 |
| cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-23T08:12:22.000000 |
| cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-23T08:12:22.000000 |
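The `openstack volume service list` check above is eyeballed in the job log; the same check can be scripted against the CLI's machine-readable output (`-f value -c Binary -c Host -c State`). A minimal sketch, in which the `sample` variable stands in for real CLI output so the filtering logic is visible without a live cloud:

```shell
# Sample lines in the shape produced by:
#   openstack volume service list -f value -c Binary -c Host -c State
# (illustrative values, not taken from a real deployment)
sample='cinder-scheduler testbed-node-0 up
cinder-volume testbed-node-5@rbd-volumes up
cinder-backup testbed-node-3 down'

# Keep only rows whose State column is not "up".
down=$(printf '%s\n' "$sample" | awk '$3 != "up"')

if [ -n "$down" ]; then
  echo "services not up:"
  printf '%s\n' "$down"
fi
```

In a CI context the script would exit non-zero when `$down` is non-empty, failing the job instead of merely printing the table.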
2025-09-23 08:12:25.016274 | orchestrator | # Neutron
2025-09-23 08:12:25.016339 | orchestrator | + openstack network agent list
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
| testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
| testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
| testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
| testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
| testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
| testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
| 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
| e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
| 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-09-23 08:12:28.130695 | orchestrator | + openstack network service provider list
| Service Type | Name | Default |
| L3_ROUTER_NAT | ovn | True |

2025-09-23 08:12:31.560394 | orchestrator | # Nova
2025-09-23 08:12:31.560451 | orchestrator | + openstack compute service list
| ID | Binary | Host | Zone | Status | State | Updated At |
| c51ff2fa-3913-4cc6-a665-c7a8ffbf07d4 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-23T08:12:31.000000 |
| 401ded84-a6af-471b-b837-64724ba53fc2 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-23T08:12:27.000000 |
| 3a40b4bb-8791-48fd-a52d-84904d2e32f5 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-23T08:12:28.000000 |
| adcef21d-da76-4942-9efb-b37e8afd4421 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-23T08:12:31.000000 |
| 3c2bf8fd-9f53-4da6-8acc-21f2eb23dec9 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-23T08:12:33.000000 |
| f2d16aeb-9535-4a93-b373-4534ce47875e | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-09-23T08:12:24.000000 |
| 906fcae6-d24d-45ad-969e-c75340bf1c83 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-23T08:12:27.000000 |
| 6e030592-d0aa-4b15-a398-7c467e1dc0af | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-23T08:12:27.000000 |
| 9c102e4b-dea3-40c2-ac59-4b52474c4b40 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-23T08:12:27.000000 |
2025-09-23 08:12:35.241512 | orchestrator | + openstack hypervisor list
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
| 0b054cce-40f2-4f3c-a293-383fe0140af9 | testbed-node-4 | QEMU | 192.168.16.14 | up |
| ee2f026c-9423-4074-a9b7-f4ca91b1df00 | testbed-node-3 | QEMU | 192.168.16.13 | up |
| d34f0c6f-0fd5-4baf-be46-6b25614bbd9d | testbed-node-5 | QEMU | 192.168.16.15 | up |

2025-09-23 08:12:38.774379 | orchestrator | # Run OpenStack test play
2025-09-23 08:12:38.774443 | orchestrator | + osism apply --environment openstack test
2025-09-23 08:12:40 | INFO  | Trying to run play test in environment openstack
2025-09-23 08:12:50 | INFO  | Task 7c504cd5-250f-4674-8241-ad253bb2a4d2 (test) was prepared for execution.
2025-09-23 08:12:50 | INFO  | It takes a moment until task 7c504cd5-250f-4674-8241-ad253bb2a4d2 (test) has been started and output is visible here.
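The `osism apply --environment openstack test` play that starts here boots several instances and then waits on them. That kind of readiness gate can be expressed with the CLI's value output (`openstack --os-cloud test server list -f value -c Name -c Status`); a sketch in which `list_servers` is a stand-in for the real CLI call:

```shell
# Stand-in for:
#   openstack --os-cloud test server list -f value -c Name -c Status
# (illustrative values, not taken from this job)
list_servers() {
  printf 'test ACTIVE\ntest-1 ACTIVE\ntest-2 BUILD\n'
}

# Names of servers that are not yet ACTIVE.
pending=$(list_servers | awk '$2 != "ACTIVE" {print $1}')

if [ -z "$pending" ]; then
  echo "all servers ACTIVE"
else
  printf 'still pending: %s\n' "$pending"
fi
```

A real gate would wrap this in a loop with a `sleep` and a timeout, re-running the CLI call until `$pending` is empty or the deadline passes.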
PLAY [Create test project] *****************************************************

TASK [Create test domain] ******************************************************
Tuesday 23 September 2025 08:12:54 +0000 (0:00:00.078) 0:00:00.078 *****
changed: [localhost]

TASK [Create test-admin user] **************************************************
Tuesday 23 September 2025 08:12:58 +0000 (0:00:03.741) 0:00:03.819 *****
changed: [localhost]

TASK [Add manager role to user test-admin] *************************************
Tuesday 23 September 2025 08:13:02 +0000 (0:00:04.172) 0:00:07.991 *****
changed: [localhost]

TASK [Create test project] *****************************************************
Tuesday 23 September 2025 08:13:09 +0000 (0:00:06.356) 0:00:14.348 *****
changed: [localhost]

TASK [Create test user] ********************************************************
Tuesday 23 September 2025 08:13:13 +0000 (0:00:03.999) 0:00:18.348 *****
changed: [localhost]

TASK [Add member roles to user test] *******************************************
Tuesday 23 September 2025 08:13:17 +0000 (0:00:04.732) 0:00:23.080 *****
changed: [localhost] => (item=load-balancer_member)
changed: [localhost] => (item=member)
changed: [localhost] => (item=creator)

TASK [Create test server group] ************************************************
Tuesday 23 September 2025 08:13:30 +0000 (0:00:12.625) 0:00:35.706 *****
changed: [localhost]

TASK [Create ssh security group] ***********************************************
Tuesday 23 September 2025 08:13:35 +0000 (0:00:04.614) 0:00:40.321 *****
changed: [localhost]

TASK [Add rule to ssh security group] ******************************************
Tuesday 23 September 2025 08:13:40 +0000 (0:00:05.191) 0:00:45.512 *****
changed: [localhost]

TASK [Create icmp security group] **********************************************
Tuesday 23 September 2025 08:13:44 +0000 (0:00:04.457) 0:00:49.969 *****
changed: [localhost]

TASK [Add rule to icmp security group] *****************************************
Tuesday 23 September 2025 08:13:48 +0000 (0:00:03.901) 0:00:53.871 *****
changed: [localhost]

TASK [Create test keypair] *****************************************************
Tuesday 23 September 2025 08:13:52 +0000 (0:00:04.221) 0:00:58.093 *****
changed: [localhost]

TASK [Create test network topology] ********************************************
Tuesday 23 September 2025 08:13:57 +0000 (0:00:04.454) 0:01:02.547 *****
changed: [localhost]

TASK [Create test instances] ***************************************************
Tuesday 23 September 2025 08:14:13 +0000 (0:00:16.262) 0:01:18.810 *****
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)

STILL ALIVE [task 'Create test instances' is running] **************************

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-2)

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-3)

STILL ALIVE [task 'Create test instances' is running] **************************

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-4)

TASK [Add metadata to instances] ***********************************************
Tuesday 23 September 2025 08:18:25 +0000 (0:04:12.025) 0:05:30.835 *****
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Add tag to instances] ****************************************************
Tuesday 23 September 2025 08:18:50 +0000 (0:00:24.841) 0:05:55.677 *****
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Create test volume] ******************************************************
Tuesday 23 September 2025 08:19:25 +0000 (0:00:35.028) 0:06:30.706 *****
changed: [localhost]

TASK [Attach test volume] ******************************************************
Tuesday 23 September 2025 08:19:32 +0000 (0:00:06.870) 0:06:37.577 *****
changed: [localhost]

TASK [Create floating ip address] **********************************************
Tuesday 23 September 2025 08:19:46 +0000 (0:00:13.836) 0:06:51.414 *****
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Tuesday 23 September 2025 08:19:51 +0000 (0:00:05.455) 0:06:56.869 *****
ok: [localhost] => {
    "msg": "192.168.112.129"
}

PLAY RECAP *********************************************************************
localhost : ok=20 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 23 September 2025 08:19:51 +0000 (0:00:00.047) 0:06:56.917 *****
===============================================================================
Create test instances ------------------------------------------------- 252.03s
Add tag to instances --------------------------------------------------- 35.03s
Add metadata to instances ---------------------------------------------- 24.84s
Create test network topology ------------------------------------------- 16.26s
Attach test volume ----------------------------------------------------- 13.84s
Add member roles to user test ------------------------------------------ 12.63s
Create test volume ------------------------------------------------------ 6.87s
Add manager role to user test-admin ------------------------------------- 6.36s
Create floating ip address ---------------------------------------------- 5.46s
Create ssh security group ----------------------------------------------- 5.19s
Create test user -------------------------------------------------------- 4.73s
Create test server group ------------------------------------------------ 4.61s
Add rule to ssh security group ------------------------------------------ 4.46s
Create test keypair ----------------------------------------------------- 4.45s
Add rule to icmp security group ----------------------------------------- 4.22s
Create test-admin user -------------------------------------------------- 4.17s
Create test project ----------------------------------------------------- 4.00s
Create icmp security group ---------------------------------------------- 3.90s
Create test domain ------------------------------------------------------ 3.74s
Print floating ip address ----------------------------------------------- 0.05s

2025-09-23 08:19:52.306609 | orchestrator | + server_list
2025-09-23 08:19:52.306691 | orchestrator | + openstack --os-cloud test server list
| ID | Name | Status | Networks | Image | Flavor |
| ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 | test-4 | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.130 | N/A (booted from volume) | SCS-1L-1 |
| 0cbe7a91-cee3-4506-abfc-1980efebc2c7 | test-3 | ACTIVE | auto_allocated_network=10.42.0.43, 192.168.112.147 | N/A (booted from volume) | SCS-1L-1 |
| 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b | test-2 | ACTIVE | auto_allocated_network=10.42.0.48, 192.168.112.138 | N/A (booted from volume) | SCS-1L-1 |
| 7f835501-63b6-45f9-9469-a0d04f94c049 | test-1 | ACTIVE | auto_allocated_network=10.42.0.6, 192.168.112.175 | N/A (booted from volume) | SCS-1L-1 |
| beb352be-0526-487b-8f6c-e464af948285 | test | ACTIVE | auto_allocated_network=10.42.0.21, 192.168.112.129 | N/A (booted from volume) | SCS-1L-1 |

2025-09-23 08:19:56.362128 | orchestrator | + openstack --os-cloud test server show test
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-09-23T08:14:57.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.21, 192.168.112.129 |
| config_drive | |
| created | 2025-09-23T08:14:22Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 2ce72446a29f7bee7ef1e1ec359df46d983648223e6c56fdcafd3c4a |
| host_status | None |
| id | beb352be-0526-487b-8f6c-e464af948285 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | ca948c7aea4c40d88625961af2d5371b |
| properties | hostname='test' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-09-23T08:18:30Z |
| user_id | 464ca7836c204924a037a5a5f36b942a |
| volumes_attached | delete_on_termination='True', id='7f81ce94-31a9-4c71-85de-3ec85bf2bd1a' |
| | delete_on_termination='False', id='faff48f6-7a4c-4725-9b51-a6fd8e41bb7f' |

2025-09-23 08:20:00.208989 | orchestrator | + openstack --os-cloud test server show test-1
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-09-23T08:15:50.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.6, 192.168.112.175 |
| config_drive | |
| created | 2025-09-23T08:15:16Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 1796655691392f143dab40da701f2a285777b2d7da715111cb07f4f4 |
| host_status | None |
| id | 7f835501-63b6-45f9-9469-a0d04f94c049 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | ca948c7aea4c40d88625961af2d5371b |
| properties | hostname='test-1' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-09-23T08:18:35Z |
| user_id | 464ca7836c204924a037a5a5f36b942a |
| volumes_attached | delete_on_termination='True', id='b6c20c05-45da-47ec-abd7-fb919c1789f3' |

2025-09-23 08:20:03.944119 | orchestrator | + openstack --os-cloud test server show test-2
2025-09-23 08:20:07.154965
+-------------------------------------+-------------------------------------------------------------------------+
| Field                               | Value |
+-------------------------------------+-------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL |
| OS-EXT-AZ:availability_zone         | nova |
| OS-EXT-SRV-ATTR:host                | None |
| OS-EXT-SRV-ATTR:hostname            | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name       | None |
| OS-EXT-SRV-ATTR:kernel_id           | None |
| OS-EXT-SRV-ATTR:launch_index        | None |
| OS-EXT-SRV-ATTR:ramdisk_id          | None |
| OS-EXT-SRV-ATTR:reservation_id      | None |
| OS-EXT-SRV-ATTR:root_device_name    | None |
| OS-EXT-SRV-ATTR:user_data           | None |
| OS-EXT-STS:power_state              | Running |
| OS-EXT-STS:task_state               | None |
| OS-EXT-STS:vm_state                 | active |
| OS-SRV-USG:launched_at              | 2025-09-23T08:16:42.000000 |
| OS-SRV-USG:terminated_at            | None |
| accessIPv4                          | |
| accessIPv6                          | |
| addresses                           | auto_allocated_network=10.42.0.48, 192.168.112.138 |
| config_drive                        | |
| created                             | 2025-09-23T08:16:07Z |
| description                         | None |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 0cff806dfba58ddb6b040a5a8aed5895fc498cdbfb6c630babe73345 |
| host_status                         | None |
| id                                  | 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b |
| image                               | N/A (booted from volume) |
| key_name                            | test |
| locked                              | False |
| locked_reason                       | None |
| name                                | test-2 |
| pinned_availability_zone            | None |
| progress                            | 0 |
| project_id                          | ca948c7aea4c40d88625961af2d5371b |
| properties                          | hostname='test-2' |
| security_groups                     | name='ssh' |
|                                     | name='icmp' |
| server_groups                       | None |
| status                              | ACTIVE |
| tags                                | test |
| trusted_image_certificates          | None |
| updated                             | 2025-09-23T08:18:40Z |
| user_id                             | 464ca7836c204924a037a5a5f36b942a |
| volumes_attached                    | delete_on_termination='True', id='cc3c0b66-81f7-4aab-b2b7-6509a29b2de8' |
+-------------------------------------+-------------------------------------------------------------------------+
2025-09-23 08:20:07.391382 | orchestrator | + openstack --os-cloud test server show test-3
+-------------------------------------+-------------------------------------------------------------------------+
| Field                               | Value |
+-------------------------------------+-------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL |
| OS-EXT-AZ:availability_zone         | nova |
| OS-EXT-SRV-ATTR:host                | None |
| OS-EXT-SRV-ATTR:hostname            | test-3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name       | None |
| OS-EXT-SRV-ATTR:kernel_id           | None |
| OS-EXT-SRV-ATTR:launch_index        | None |
| OS-EXT-SRV-ATTR:ramdisk_id          | None |
| OS-EXT-SRV-ATTR:reservation_id      | None |
| OS-EXT-SRV-ATTR:root_device_name    | None |
| OS-EXT-SRV-ATTR:user_data           | None |
| OS-EXT-STS:power_state              | Running |
| OS-EXT-STS:task_state               | None |
| OS-EXT-STS:vm_state                 | active |
| OS-SRV-USG:launched_at              | 2025-09-23T08:17:27.000000 |
| OS-SRV-USG:terminated_at            | None |
| accessIPv4                          | |
| accessIPv6                          | |
| addresses                           | auto_allocated_network=10.42.0.43, 192.168.112.147 |
| config_drive                        | |
| created                             | 2025-09-23T08:17:02Z |
| description                         | None |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 1796655691392f143dab40da701f2a285777b2d7da715111cb07f4f4 |
| host_status                         | None |
| id                                  | 0cbe7a91-cee3-4506-abfc-1980efebc2c7 |
| image                               | N/A (booted from volume) |
| key_name                            | test |
| locked                              | False |
| locked_reason                       | None |
| name                                | test-3 |
| pinned_availability_zone            | None |
| progress                            | 0 |
| project_id                          | ca948c7aea4c40d88625961af2d5371b |
| properties                          | hostname='test-3' |
| security_groups                     | name='ssh' |
|                                     | name='icmp' |
| server_groups                       | None |
| status                              | ACTIVE |
| tags                                | test |
| trusted_image_certificates          | None |
| updated                             | 2025-09-23T08:18:45Z |
| user_id                             | 464ca7836c204924a037a5a5f36b942a |
| volumes_attached                    | delete_on_termination='True', id='edf4e3fb-f054-477f-aaad-aac493327e33' |
+-------------------------------------+-------------------------------------------------------------------------+
2025-09-23 08:20:10.609369 | orchestrator | + openstack --os-cloud test server show test-4
+-------------------------------------+-------------------------------------------------------------------------+
| Field                               | Value |
+-------------------------------------+-------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL |
| OS-EXT-AZ:availability_zone         | nova |
| OS-EXT-SRV-ATTR:host                | None |
| OS-EXT-SRV-ATTR:hostname            | test-4 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name       | None |
| OS-EXT-SRV-ATTR:kernel_id           | None |
| OS-EXT-SRV-ATTR:launch_index        | None |
| OS-EXT-SRV-ATTR:ramdisk_id          | None |
| OS-EXT-SRV-ATTR:reservation_id      | None |
| OS-EXT-SRV-ATTR:root_device_name    | None |
| OS-EXT-SRV-ATTR:user_data           | None |
| OS-EXT-STS:power_state              | Running |
| OS-EXT-STS:task_state               | None |
| OS-EXT-STS:vm_state                 | active |
| OS-SRV-USG:launched_at              | 2025-09-23T08:18:13.000000 |
| OS-SRV-USG:terminated_at            | None |
| accessIPv4                          | |
| accessIPv6                          | |
| addresses                           | auto_allocated_network=10.42.0.46, 192.168.112.130 |
| config_drive                        | |
| created                             | 2025-09-23T08:17:47Z |
| description                         | None |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 0cff806dfba58ddb6b040a5a8aed5895fc498cdbfb6c630babe73345 |
| host_status                         | None |
| id                                  | ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 |
| image                               | N/A (booted from volume) |
| key_name                            | test |
| locked                              | False |
| locked_reason                       | None |
| name                                | test-4 |
| pinned_availability_zone            | None |
| progress                            | 0 |
| project_id                          | ca948c7aea4c40d88625961af2d5371b |
| properties                          | hostname='test-4' |
| security_groups                     | name='ssh' |
|                                     | name='icmp' |
| server_groups                       | None |
| status                              | ACTIVE |
| tags                                | test |
| trusted_image_certificates          | None |
| updated                             | 2025-09-23T08:18:50Z |
| user_id                             | 464ca7836c204924a037a5a5f36b942a |
| volumes_attached                    | delete_on_termination='True', id='3257b49c-4993-46cb-a691-bb3ee17be999' |
+-------------------------------------+-------------------------------------------------------------------------+
2025-09-23 08:20:13.905218 | orchestrator | + server_ping
2025-09-23 08:20:13.906533 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-23 08:20:13.906816 | orchestrator | ++ tr -d '\r'
2025-09-23 08:20:17.088628 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:20:17.088703 | orchestrator | + ping -c3 192.168.112.138
2025-09-23 08:20:17.103522 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data.
2025-09-23 08:20:17.103580 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=6.29 ms
2025-09-23 08:20:18.102157 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.98 ms
2025-09-23 08:20:19.102647 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=1.98 ms
2025-09-23 08:20:19.102750 | orchestrator |
2025-09-23 08:20:19.102764 | orchestrator | --- 192.168.112.138 ping statistics ---
2025-09-23 08:20:19.102774 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:20:19.102783 | orchestrator | rtt min/avg/max/mdev = 1.982/3.751/6.287/1.839 ms
2025-09-23 08:20:19.103440 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:20:19.104230 | orchestrator | + ping -c3 192.168.112.175
2025-09-23 08:20:19.115966 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2025-09-23 08:20:19.116020 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.96 ms
2025-09-23 08:20:20.113749 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=3.03 ms
2025-09-23 08:20:21.113753 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.77 ms
2025-09-23 08:20:21.114190 | orchestrator |
2025-09-23 08:20:21.114223 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-09-23 08:20:21.114256 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:20:21.114269 | orchestrator | rtt min/avg/max/mdev = 1.774/3.920/6.960/2.209 ms
2025-09-23 08:20:21.115172 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:20:21.115202 | orchestrator | + ping -c3 192.168.112.129
2025-09-23 08:20:21.126231 | orchestrator | PING 192.168.112.129 (192.168.112.129) 56(84) bytes of data.
2025-09-23 08:20:21.126297 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=1 ttl=63 time=6.37 ms
2025-09-23 08:20:22.124423 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=2 ttl=63 time=2.65 ms
2025-09-23 08:20:23.125628 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=3 ttl=63 time=1.75 ms
2025-09-23 08:20:23.125724 | orchestrator |
2025-09-23 08:20:23.125739 | orchestrator | --- 192.168.112.129 ping statistics ---
2025-09-23 08:20:23.125751 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:20:23.125762 | orchestrator | rtt min/avg/max/mdev = 1.748/3.590/6.369/1.999 ms
2025-09-23 08:20:23.126103 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:20:23.126126 | orchestrator | + ping -c3 192.168.112.147
2025-09-23 08:20:23.138194 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2025-09-23 08:20:23.138247 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=7.73 ms
2025-09-23 08:20:24.135096 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.60 ms
2025-09-23 08:20:25.138202 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.07 ms
2025-09-23 08:20:25.138325 | orchestrator |
2025-09-23 08:20:25.138350 | orchestrator | --- 192.168.112.147 ping statistics ---
2025-09-23 08:20:25.138373 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-23 08:20:25.138393 | orchestrator | rtt min/avg/max/mdev = 2.069/4.134/7.733/2.554 ms
2025-09-23 08:20:25.138413 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:20:25.138432 | orchestrator | + ping -c3 192.168.112.130
2025-09-23 08:20:25.150618 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data.
2025-09-23 08:20:25.150702 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=9.07 ms
2025-09-23 08:20:26.145914 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.70 ms
2025-09-23 08:20:27.146809 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=2.41 ms
2025-09-23 08:20:27.146983 | orchestrator |
2025-09-23 08:20:27.147004 | orchestrator | --- 192.168.112.130 ping statistics ---
2025-09-23 08:20:27.147018 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:20:27.147030 | orchestrator | rtt min/avg/max/mdev = 2.407/4.722/9.066/3.073 ms
2025-09-23 08:20:27.147392 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-23 08:20:27.147564 | orchestrator | + compute_list
2025-09-23 08:20:27.147585 | orchestrator | + osism manage compute list testbed-node-3
2025-09-23 08:20:30.427930 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:20:30.428047 | orchestrator | | ID                                   | Name   | Status   |
2025-09-23 08:20:30.428063 | orchestrator | |--------------------------------------+--------+----------|
2025-09-23 08:20:30.428077 | orchestrator | | ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 | test-4 | ACTIVE   |
2025-09-23 08:20:30.428090 | orchestrator | | 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b | test-2 | ACTIVE   |
2025-09-23 08:20:30.428103 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:20:30.799917 | orchestrator | + osism manage compute list testbed-node-4
2025-09-23 08:20:34.084441 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:20:34.084499 | orchestrator | | ID                                   | Name   | Status   |
2025-09-23 08:20:34.084525 | orchestrator | |--------------------------------------+--------+----------|
2025-09-23 08:20:34.084531 | orchestrator | | beb352be-0526-487b-8f6c-e464af948285 | test   | ACTIVE   |
2025-09-23 08:20:34.084536 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:20:34.474594 | orchestrator | + osism manage compute list testbed-node-5
2025-09-23 08:20:37.914555 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:20:37.914674 | orchestrator | | ID                                   | Name   | Status   |
2025-09-23 08:20:37.914690 | orchestrator | |--------------------------------------+--------+----------|
2025-09-23 08:20:37.914741 | orchestrator | | 0cbe7a91-cee3-4506-abfc-1980efebc2c7 | test-3 | ACTIVE   |
2025-09-23 08:20:37.914754 | orchestrator | | 7f835501-63b6-45f9-9469-a0d04f94c049 | test-1 | ACTIVE   |
2025-09-23 08:20:37.914766 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:20:38.277799 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-09-23 08:20:41.324890 | orchestrator | 2025-09-23 08:20:41 | INFO  | Live migrating server beb352be-0526-487b-8f6c-e464af948285
2025-09-23 08:20:54.430133 | orchestrator | 2025-09-23 08:20:54 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:20:56.795075 | orchestrator | 2025-09-23 08:20:56 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:20:59.387892 | orchestrator | 2025-09-23 08:20:59 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:21:02.069315 | orchestrator | 2025-09-23 08:21:02 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:21:04.336024 | orchestrator | 2025-09-23 08:21:04 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:21:06.642593 | orchestrator | 2025-09-23 08:21:06 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:21:09.071740 | orchestrator | 2025-09-23 08:21:09 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:21:11.439151 | orchestrator | 2025-09-23 08:21:11 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:21:13.747857 | orchestrator | 2025-09-23 08:21:13 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:21:15.973982 | orchestrator | 2025-09-23 08:21:15 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:21:18.316229 | orchestrator | 2025-09-23 08:21:18 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) completed with status ACTIVE
2025-09-23 08:21:18.754166 | orchestrator | + compute_list
2025-09-23 08:21:18.754263 | orchestrator | + osism manage compute list testbed-node-3
2025-09-23 08:21:22.183666 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:21:22.183739 | orchestrator | | ID                                   | Name   | Status   |
2025-09-23 08:21:22.183745 | orchestrator | |--------------------------------------+--------+----------|
2025-09-23 08:21:22.183759 | orchestrator | | ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 | test-4 | ACTIVE   |
2025-09-23 08:21:22.183764 | orchestrator | | 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b | test-2 | ACTIVE   |
2025-09-23 08:21:22.183769 | orchestrator | | beb352be-0526-487b-8f6c-e464af948285 | test   | ACTIVE   |
2025-09-23 08:21:22.183774 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:21:22.601586 | orchestrator | + osism manage compute list testbed-node-4
2025-09-23 08:21:25.556097 | orchestrator | +------+--------+----------+
2025-09-23 08:21:25.556197 | orchestrator | | ID   | Name   | Status   |
2025-09-23 08:21:25.556212 | orchestrator | |------+--------+----------|
2025-09-23 08:21:25.556222 | orchestrator | +------+--------+----------+
2025-09-23 08:21:25.982980 | orchestrator | + osism manage compute list testbed-node-5
2025-09-23 08:21:29.363487 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:21:29.363553 | orchestrator | | ID                                   | Name   | Status   |
2025-09-23 08:21:29.363560 | orchestrator | |--------------------------------------+--------+----------|
2025-09-23 08:21:29.363564 | orchestrator | | 0cbe7a91-cee3-4506-abfc-1980efebc2c7 | test-3 | ACTIVE   |
2025-09-23 08:21:29.363592 | orchestrator | | 7f835501-63b6-45f9-9469-a0d04f94c049 | test-1 | ACTIVE   |
2025-09-23 08:21:29.363596 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:21:29.789610 | orchestrator | + server_ping
2025-09-23 08:21:29.790831 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-23 08:21:29.790874 | orchestrator | ++ tr -d '\r'
2025-09-23 08:21:32.816472 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:21:32.816649 | orchestrator | + ping -c3 192.168.112.138
2025-09-23 08:21:32.825818 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data.
2025-09-23 08:21:32.825919 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=6.54 ms 2025-09-23 08:21:33.824277 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.80 ms 2025-09-23 08:21:34.825697 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.13 ms 2025-09-23 08:21:34.825763 | orchestrator | 2025-09-23 08:21:34.825777 | orchestrator | --- 192.168.112.138 ping statistics --- 2025-09-23 08:21:34.825794 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-23 08:21:34.825813 | orchestrator | rtt min/avg/max/mdev = 2.134/3.822/6.536/1.937 ms 2025-09-23 08:21:34.826339 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:21:34.826381 | orchestrator | + ping -c3 192.168.112.175 2025-09-23 08:21:34.838174 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 2025-09-23 08:21:34.838224 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.47 ms 2025-09-23 08:21:35.836651 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.69 ms 2025-09-23 08:21:36.836899 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=2.04 ms 2025-09-23 08:21:36.836988 | orchestrator | 2025-09-23 08:21:36.837011 | orchestrator | --- 192.168.112.175 ping statistics --- 2025-09-23 08:21:36.837033 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-23 08:21:36.837053 | orchestrator | rtt min/avg/max/mdev = 2.036/3.731/6.466/1.951 ms 2025-09-23 08:21:36.837510 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:21:36.837545 | orchestrator | + ping -c3 192.168.112.129 2025-09-23 08:21:36.848512 | orchestrator | PING 192.168.112.129 (192.168.112.129) 56(84) bytes of data. 
2025-09-23 08:21:36.848552 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=1 ttl=63 time=6.26 ms 2025-09-23 08:21:37.846662 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=2 ttl=63 time=2.34 ms 2025-09-23 08:21:38.848697 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=3 ttl=63 time=2.21 ms 2025-09-23 08:21:38.848784 | orchestrator | 2025-09-23 08:21:38.848800 | orchestrator | --- 192.168.112.129 ping statistics --- 2025-09-23 08:21:38.848812 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-23 08:21:38.848824 | orchestrator | rtt min/avg/max/mdev = 2.214/3.607/6.263/1.878 ms 2025-09-23 08:21:38.849174 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:21:38.849199 | orchestrator | + ping -c3 192.168.112.147 2025-09-23 08:21:38.864238 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data. 2025-09-23 08:21:38.864305 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=9.20 ms 2025-09-23 08:21:39.859515 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.58 ms 2025-09-23 08:21:40.860890 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.06 ms 2025-09-23 08:21:40.860968 | orchestrator | 2025-09-23 08:21:40.860981 | orchestrator | --- 192.168.112.147 ping statistics --- 2025-09-23 08:21:40.860994 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-23 08:21:40.861005 | orchestrator | rtt min/avg/max/mdev = 2.061/4.614/9.201/3.250 ms 2025-09-23 08:21:40.861458 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:21:40.861480 | orchestrator | + ping -c3 192.168.112.130 2025-09-23 08:21:40.876652 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 
2025-09-23 08:21:40.876733 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=9.95 ms
2025-09-23 08:21:41.871611 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=3.08 ms
2025-09-23 08:21:42.872768 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=2.17 ms
2025-09-23 08:21:42.872890 | orchestrator |
2025-09-23 08:21:42.872914 | orchestrator | --- 192.168.112.130 ping statistics ---
2025-09-23 08:21:42.872931 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:21:42.872947 | orchestrator | rtt min/avg/max/mdev = 2.167/5.067/9.954/3.475 ms
2025-09-23 08:21:42.873347 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2025-09-23 08:21:46.337202 | orchestrator | 2025-09-23 08:21:46 | INFO  | Live migrating server 0cbe7a91-cee3-4506-abfc-1980efebc2c7
2025-09-23 08:21:58.687717 | orchestrator | 2025-09-23 08:21:58 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:01.042291 | orchestrator | 2025-09-23 08:22:01 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:03.298579 | orchestrator | 2025-09-23 08:22:03 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:05.630959 | orchestrator | 2025-09-23 08:22:05 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:07.982113 | orchestrator | 2025-09-23 08:22:07 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:10.312826 | orchestrator | 2025-09-23 08:22:10 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:12.692981 | orchestrator | 2025-09-23 08:22:12 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:14.947038 | orchestrator | 2025-09-23 08:22:14 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:17.281704 | orchestrator | 2025-09-23 08:22:17 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:22:19.590983 | orchestrator | 2025-09-23 08:22:19 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) completed with status ACTIVE
2025-09-23 08:22:19.591056 | orchestrator | 2025-09-23 08:22:19 | INFO  | Live migrating server 7f835501-63b6-45f9-9469-a0d04f94c049
2025-09-23 08:22:31.294306 | orchestrator | 2025-09-23 08:22:31 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:22:33.619774 | orchestrator | 2025-09-23 08:22:33 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:22:35.960890 | orchestrator | 2025-09-23 08:22:35 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:22:38.318394 | orchestrator | 2025-09-23 08:22:38 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:22:40.670925 | orchestrator | 2025-09-23 08:22:40 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:22:42.941201 | orchestrator | 2025-09-23 08:22:42 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:22:45.223198 | orchestrator | 2025-09-23 08:22:45 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:22:47.545310 | orchestrator | 2025-09-23 08:22:47 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:22:49.826358 | orchestrator | 2025-09-23 08:22:49 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) completed with status ACTIVE
2025-09-23 08:22:50.281424 | orchestrator | + compute_list
2025-09-23 08:22:50.281492 | orchestrator | + osism manage compute list testbed-node-3
2025-09-23 08:22:53.734547 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:22:53.734734 | orchestrator | | ID                                   | Name   | Status   |
2025-09-23 08:22:53.734765 | orchestrator | |--------------------------------------+--------+----------|
2025-09-23 08:22:53.734787 | orchestrator | | ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 | test-4 | ACTIVE   |
2025-09-23 08:22:53.734807 | orchestrator | | 0cbe7a91-cee3-4506-abfc-1980efebc2c7 | test-3 | ACTIVE   |
2025-09-23 08:22:53.734828 | orchestrator | | 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b | test-2 | ACTIVE   |
2025-09-23 08:22:53.734846 | orchestrator | | 7f835501-63b6-45f9-9469-a0d04f94c049 | test-1 | ACTIVE   |
2025-09-23 08:22:53.734865 | orchestrator | | beb352be-0526-487b-8f6c-e464af948285 | test   | ACTIVE   |
2025-09-23 08:22:53.734883 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:22:54.136241 | orchestrator | + osism manage compute list testbed-node-4
2025-09-23 08:22:57.034321 | orchestrator | +------+--------+----------+
2025-09-23 08:22:57.034436 | orchestrator | | ID   | Name   | Status   |
2025-09-23 08:22:57.034460 | orchestrator | |------+--------+----------|
2025-09-23 08:22:57.034480 | orchestrator | +------+--------+----------+
2025-09-23 08:22:57.386561 | orchestrator | + osism manage compute list testbed-node-5
2025-09-23 08:23:00.209051 | orchestrator | +------+--------+----------+
2025-09-23 08:23:00.209140 | orchestrator | | ID   | Name   | Status   |
2025-09-23 08:23:00.209151 | orchestrator | |------+--------+----------|
2025-09-23 08:23:00.209162 | orchestrator | +------+--------+----------+
2025-09-23 08:23:00.624589 | orchestrator | + server_ping
2025-09-23 08:23:00.626563 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-23 08:23:00.629922 | orchestrator | ++ tr -d '\r'
2025-09-23 08:23:03.606839 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:23:03.606948 | orchestrator | + ping -c3 192.168.112.138
2025-09-23 08:23:03.616103 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data.
2025-09-23 08:23:03.616161 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=6.98 ms
2025-09-23 08:23:04.613307 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.63 ms
2025-09-23 08:23:05.614580 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.04 ms
2025-09-23 08:23:05.614661 | orchestrator |
2025-09-23 08:23:05.614671 | orchestrator | --- 192.168.112.138 ping statistics ---
2025-09-23 08:23:05.614747 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:23:05.614756 | orchestrator | rtt min/avg/max/mdev = 2.036/3.879/6.975/2.202 ms
2025-09-23 08:23:05.615375 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:23:05.615390 | orchestrator | + ping -c3 192.168.112.175
2025-09-23 08:23:05.625244 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
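The trace above shows `+ compute_list` expanding into three `osism manage compute list` calls, one per compute node, to verify where the instances currently sit. A minimal sketch of that helper, with the hard-coded node list assumed from the traced calls:

```shell
# Sketch of the compute_list helper seen in the trace: it lists the
# instances hosted on each testbed compute node, so an operator can
# confirm a migration emptied the source node.
# The node list is an assumption based on the three traced calls.
compute_list() {
    for node in testbed-node-3 testbed-node-4 testbed-node-5; do
        osism manage compute list "$node"
    done
}
```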
2025-09-23 08:23:05.625297 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.31 ms
2025-09-23 08:23:06.623201 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.66 ms
2025-09-23 08:23:07.625662 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=2.17 ms
2025-09-23 08:23:07.625830 | orchestrator |
2025-09-23 08:23:07.625852 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-09-23 08:23:07.625867 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:23:07.625880 | orchestrator | rtt min/avg/max/mdev = 2.166/3.713/6.312/1.848 ms
2025-09-23 08:23:07.625893 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:23:07.625907 | orchestrator | + ping -c3 192.168.112.129
2025-09-23 08:23:07.638673 | orchestrator | PING 192.168.112.129 (192.168.112.129) 56(84) bytes of data.
2025-09-23 08:23:07.638788 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=1 ttl=63 time=7.57 ms
2025-09-23 08:23:08.636090 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=2 ttl=63 time=2.88 ms
2025-09-23 08:23:09.637007 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=3 ttl=63 time=1.82 ms
2025-09-23 08:23:09.637104 | orchestrator |
2025-09-23 08:23:09.637120 | orchestrator | --- 192.168.112.129 ping statistics ---
2025-09-23 08:23:09.637133 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:23:09.637170 | orchestrator | rtt min/avg/max/mdev = 1.823/4.089/7.569/2.497 ms
2025-09-23 08:23:09.637195 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:23:09.637208 | orchestrator | + ping -c3 192.168.112.147
2025-09-23 08:23:09.648453 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2025-09-23 08:23:09.648513 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=6.24 ms
2025-09-23 08:23:10.646525 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.61 ms
2025-09-23 08:23:11.646745 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.01 ms
2025-09-23 08:23:11.646846 | orchestrator |
2025-09-23 08:23:11.646862 | orchestrator | --- 192.168.112.147 ping statistics ---
2025-09-23 08:23:11.646875 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-23 08:23:11.646886 | orchestrator | rtt min/avg/max/mdev = 2.014/3.621/6.241/1.868 ms
2025-09-23 08:23:11.647201 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:23:11.647225 | orchestrator | + ping -c3 192.168.112.130
2025-09-23 08:23:11.661919 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data.
2025-09-23 08:23:11.662075 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=9.19 ms
2025-09-23 08:23:12.656813 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.16 ms
2025-09-23 08:23:13.658631 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.71 ms
2025-09-23 08:23:13.658767 | orchestrator |
2025-09-23 08:23:13.658785 | orchestrator | --- 192.168.112.130 ping statistics ---
2025-09-23 08:23:13.658798 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:23:13.658810 | orchestrator | rtt min/avg/max/mdev = 1.705/4.348/9.186/3.425 ms
2025-09-23 08:23:13.658821 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-09-23 08:23:17.038971 | orchestrator | 2025-09-23 08:23:17 | INFO  | Live migrating server ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5
2025-09-23 08:23:30.090993 | orchestrator | 2025-09-23 08:23:30 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:23:32.466495 | orchestrator | 2025-09-23 08:23:32 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:23:34.780050 | orchestrator | 2025-09-23 08:23:34 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:23:37.120370 | orchestrator | 2025-09-23 08:23:37 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:23:39.494308 | orchestrator | 2025-09-23 08:23:39 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:23:41.820879 | orchestrator | 2025-09-23 08:23:41 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:23:44.112973 | orchestrator | 2025-09-23 08:23:44 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:23:46.397586 | orchestrator | 2025-09-23 08:23:46 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:23:48.686196 | orchestrator | 2025-09-23 08:23:48 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) completed with status ACTIVE
2025-09-23 08:23:48.686293 | orchestrator | 2025-09-23 08:23:48 | INFO  | Live migrating server 0cbe7a91-cee3-4506-abfc-1980efebc2c7
2025-09-23 08:24:02.071716 | orchestrator | 2025-09-23 08:24:02 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:04.406309 | orchestrator | 2025-09-23 08:24:04 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:06.701183 | orchestrator | 2025-09-23 08:24:06 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:09.085062 | orchestrator | 2025-09-23 08:24:09 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:11.465916 | orchestrator | 2025-09-23 08:24:11 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:13.758293 | orchestrator | 2025-09-23 08:24:13 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:16.081131 | orchestrator | 2025-09-23 08:24:16 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:18.418253 | orchestrator | 2025-09-23 08:24:18 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:20.768044 | orchestrator | 2025-09-23 08:24:20 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:23.133268 | orchestrator | 2025-09-23 08:24:23 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:24:25.429111 | orchestrator | 2025-09-23 08:24:25 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) completed with status ACTIVE
2025-09-23 08:24:25.429219 | orchestrator | 2025-09-23 08:24:25 | INFO  | Live migrating server 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b
2025-09-23 08:24:36.134417 | orchestrator | 2025-09-23 08:24:36 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:24:38.453035 | orchestrator | 2025-09-23 08:24:38 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:24:40.847628 | orchestrator | 2025-09-23 08:24:40 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:24:43.222874 | orchestrator | 2025-09-23 08:24:43 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:24:45.566565 | orchestrator | 2025-09-23 08:24:45 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:24:47.873124 | orchestrator | 2025-09-23 08:24:47 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:24:50.134757 | orchestrator | 2025-09-23 08:24:50 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:24:52.558110 | orchestrator | 2025-09-23 08:24:52 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:24:54.827104 | orchestrator | 2025-09-23 08:24:54 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) completed with status ACTIVE
2025-09-23 08:24:54.827170 | orchestrator | 2025-09-23 08:24:54 | INFO  | Live migrating server 7f835501-63b6-45f9-9469-a0d04f94c049
2025-09-23 08:25:04.175837 | orchestrator | 2025-09-23 08:25:04 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:06.511817 | orchestrator | 2025-09-23 08:25:06 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:08.836206 | orchestrator | 2025-09-23 08:25:08 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:11.107147 | orchestrator | 2025-09-23 08:25:11 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:13.457688 | orchestrator | 2025-09-23 08:25:13 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:15.726458 | orchestrator | 2025-09-23 08:25:15 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:18.222702 | orchestrator | 2025-09-23 08:25:18 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:20.476033 | orchestrator | 2025-09-23 08:25:20 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:22.759231 | orchestrator | 2025-09-23 08:25:22 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:25:25.090106 | orchestrator | 2025-09-23 08:25:25 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) completed with status ACTIVE
2025-09-23 08:25:25.090203 | orchestrator | 2025-09-23 08:25:25 | INFO  | Live migrating server beb352be-0526-487b-8f6c-e464af948285
2025-09-23 08:25:35.505268 | orchestrator | 2025-09-23 08:25:35 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:37.986373 | orchestrator | 2025-09-23 08:25:37 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:40.342125 | orchestrator | 2025-09-23 08:25:40 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:42.861217 | orchestrator | 2025-09-23 08:25:42 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:45.227034 | orchestrator | 2025-09-23 08:25:45 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:47.521017 | orchestrator | 2025-09-23 08:25:47 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:49.822861 | orchestrator | 2025-09-23 08:25:49 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:52.242823 | orchestrator | 2025-09-23 08:25:52 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:54.668294 | orchestrator | 2025-09-23 08:25:54 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:25:56.917203 | orchestrator | 2025-09-23 08:25:56 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) completed with status ACTIVE
2025-09-23 08:25:57.279438 | orchestrator | + compute_list
2025-09-23 08:25:57.279533 | orchestrator | + osism manage compute list testbed-node-3
2025-09-23 08:26:00.061391 | orchestrator | +------+--------+----------+
2025-09-23 08:26:00.061518 | orchestrator | | ID   | Name   | Status   |
2025-09-23 08:26:00.061549 | orchestrator | |------+--------+----------|
2025-09-23 08:26:00.061569 | orchestrator | +------+--------+----------+
2025-09-23 08:26:00.497596 | orchestrator | + osism manage compute list testbed-node-4
2025-09-23 08:26:03.692892 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:26:03.693055 | orchestrator | | ID                                   | Name   | Status   |
2025-09-23 08:26:03.693072 | orchestrator | |--------------------------------------+--------+----------|
2025-09-23 08:26:03.693083 | orchestrator | | ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 | test-4 | ACTIVE   |
2025-09-23 08:26:03.693094 | orchestrator | | 0cbe7a91-cee3-4506-abfc-1980efebc2c7 | test-3 | ACTIVE   |
2025-09-23 08:26:03.693106 | orchestrator | | 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b | test-2 | ACTIVE   |
2025-09-23 08:26:03.693117 | orchestrator | | 7f835501-63b6-45f9-9469-a0d04f94c049 | test-1 | ACTIVE   |
2025-09-23 08:26:03.693127 | orchestrator | | beb352be-0526-487b-8f6c-e464af948285 | test   | ACTIVE   |
2025-09-23 08:26:03.693139 | orchestrator | +--------------------------------------+--------+----------+
2025-09-23 08:26:04.039511 | orchestrator | + osism manage compute list testbed-node-5
2025-09-23 08:26:06.872300 | orchestrator | +------+--------+----------+
2025-09-23 08:26:06.872399 | orchestrator | | ID   | Name   | Status   |
2025-09-23 08:26:06.872413 | orchestrator | |------+--------+----------|
2025-09-23 08:26:06.872425 | orchestrator | +------+--------+----------+
2025-09-23 08:26:07.235106 | orchestrator | + server_ping
2025-09-23 08:26:07.236042 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-23 08:26:07.236344 | orchestrator | ++ tr -d '\r'
2025-09-23 08:26:10.412057 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:26:10.412183 | orchestrator | + ping -c3 192.168.112.138
2025-09-23 08:26:10.421286 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data.
2025-09-23 08:26:10.421374 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=6.14 ms
2025-09-23 08:26:11.418702 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.08 ms
2025-09-23 08:26:12.420322 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=1.83 ms
2025-09-23 08:26:12.420397 | orchestrator |
2025-09-23 08:26:12.420405 | orchestrator | --- 192.168.112.138 ping statistics ---
2025-09-23 08:26:12.420412 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:26:12.420418 | orchestrator | rtt min/avg/max/mdev = 1.827/3.346/6.137/1.975 ms
2025-09-23 08:26:12.421015 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:26:12.421043 | orchestrator | + ping -c3 192.168.112.175
2025-09-23 08:26:12.433014 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2025-09-23 08:26:12.433049 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=7.74 ms
2025-09-23 08:26:13.428934 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.18 ms
2025-09-23 08:26:14.429051 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.68 ms
2025-09-23 08:26:14.429123 | orchestrator |
2025-09-23 08:26:14.429129 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-09-23 08:26:14.429134 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-23 08:26:14.429139 | orchestrator | rtt min/avg/max/mdev = 1.677/3.865/7.741/2.747 ms
2025-09-23 08:26:14.429854 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:26:14.429881 | orchestrator | + ping -c3 192.168.112.129
2025-09-23 08:26:14.438543 | orchestrator | PING 192.168.112.129 (192.168.112.129) 56(84) bytes of data.
2025-09-23 08:26:14.438591 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=1 ttl=63 time=5.69 ms
2025-09-23 08:26:15.437033 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=2 ttl=63 time=2.32 ms
2025-09-23 08:26:16.438892 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=3 ttl=63 time=2.28 ms
2025-09-23 08:26:16.439066 | orchestrator |
2025-09-23 08:26:16.439087 | orchestrator | --- 192.168.112.129 ping statistics ---
2025-09-23 08:26:16.439100 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:26:16.439112 | orchestrator | rtt min/avg/max/mdev = 2.282/3.429/5.691/1.599 ms
2025-09-23 08:26:16.439124 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:26:16.439209 | orchestrator | + ping -c3 192.168.112.147
2025-09-23 08:26:16.455146 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2025-09-23 08:26:16.455218 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=11.3 ms
2025-09-23 08:26:17.448011 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.28 ms
2025-09-23 08:26:18.449856 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.01 ms
2025-09-23 08:26:18.450005 | orchestrator |
2025-09-23 08:26:18.450170 | orchestrator | --- 192.168.112.147 ping statistics ---
2025-09-23 08:26:18.450187 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:26:18.450199 | orchestrator | rtt min/avg/max/mdev = 2.011/5.195/11.292/4.312 ms
2025-09-23 08:26:18.450302 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-23 08:26:18.450319 | orchestrator | + ping -c3 192.168.112.130
2025-09-23 08:26:18.462201 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data.
2025-09-23 08:26:18.462244 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=7.23 ms
2025-09-23 08:26:19.459101 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.63 ms
2025-09-23 08:26:20.460322 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.92 ms
2025-09-23 08:26:20.460422 | orchestrator |
2025-09-23 08:26:20.460437 | orchestrator | --- 192.168.112.130 ping statistics ---
2025-09-23 08:26:20.460450 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-23 08:26:20.460462 | orchestrator | rtt min/avg/max/mdev = 1.919/3.924/7.226/2.352 ms
2025-09-23 08:26:20.460992 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-09-23 08:26:23.833917 | orchestrator | 2025-09-23 08:26:23 | INFO  | Live migrating server ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5
2025-09-23 08:26:33.723504 | orchestrator | 2025-09-23 08:26:33 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:26:36.077938 | orchestrator | 2025-09-23 08:26:36 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:26:38.436140 | orchestrator | 2025-09-23 08:26:38 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:26:40.757721 | orchestrator | 2025-09-23 08:26:40 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:26:43.081656 | orchestrator | 2025-09-23 08:26:43 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:26:45.337618 | orchestrator | 2025-09-23 08:26:45 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:26:47.620462 | orchestrator | 2025-09-23 08:26:47 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:26:49.886531 | orchestrator | 2025-09-23 08:26:49 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) is still in progress
2025-09-23 08:26:52.184189 | orchestrator | 2025-09-23 08:26:52 | INFO  | Live migration of ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 (test-4) completed with status ACTIVE
2025-09-23 08:26:52.184287 | orchestrator | 2025-09-23 08:26:52 | INFO  | Live migrating server 0cbe7a91-cee3-4506-abfc-1980efebc2c7
2025-09-23 08:27:02.917939 | orchestrator | 2025-09-23 08:27:02 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:27:05.300994 | orchestrator | 2025-09-23 08:27:05 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:27:07.644220 | orchestrator | 2025-09-23 08:27:07 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:27:09.947515 | orchestrator | 2025-09-23 08:27:09 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:27:12.301824 | orchestrator | 2025-09-23 08:27:12 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:27:14.589745 | orchestrator | 2025-09-23 08:27:14 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:27:16.919281 | orchestrator | 2025-09-23 08:27:16 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:27:19.194176 | orchestrator | 2025-09-23 08:27:19 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) is still in progress
2025-09-23 08:27:21.508205 | orchestrator | 2025-09-23 08:27:21 | INFO  | Live migration of 0cbe7a91-cee3-4506-abfc-1980efebc2c7 (test-3) completed with status ACTIVE
2025-09-23 08:27:21.508338 | orchestrator | 2025-09-23 08:27:21 | INFO  | Live migrating server 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b
2025-09-23 08:27:31.289839 | orchestrator | 2025-09-23 08:27:31 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:27:33.589494 | orchestrator | 2025-09-23 08:27:33 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:27:35.959786 | orchestrator | 2025-09-23 08:27:35 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:27:38.234127 | orchestrator | 2025-09-23 08:27:38 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:27:40.501776 | orchestrator | 2025-09-23 08:27:40 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:27:42.843546 | orchestrator | 2025-09-23 08:27:42 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:27:45.114811 | orchestrator | 2025-09-23 08:27:45 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:27:47.399831 | orchestrator | 2025-09-23 08:27:47 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) is still in progress
2025-09-23 08:27:49.728153 | orchestrator | 2025-09-23 08:27:49 | INFO  | Live migration of 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b (test-2) completed with status ACTIVE
2025-09-23 08:27:49.728259 | orchestrator | 2025-09-23 08:27:49 | INFO  | Live migrating server 7f835501-63b6-45f9-9469-a0d04f94c049
2025-09-23 08:27:59.401589 | orchestrator | 2025-09-23 08:27:59 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:28:01.762074 | orchestrator | 2025-09-23 08:28:01 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:28:04.107405 | orchestrator | 2025-09-23 08:28:04 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:28:06.377369 | orchestrator | 2025-09-23 08:28:06 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:28:08.723703 | orchestrator | 2025-09-23 08:28:08 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:28:10.993525 | orchestrator | 2025-09-23 08:28:10 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:28:13.338762 | orchestrator | 2025-09-23 08:28:13 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:28:15.632821 | orchestrator | 2025-09-23 08:28:15 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) is still in progress
2025-09-23 08:28:17.954525 | orchestrator | 2025-09-23 08:28:17 | INFO  | Live migration of 7f835501-63b6-45f9-9469-a0d04f94c049 (test-1) completed with status ACTIVE
2025-09-23 08:28:17.954641 | orchestrator | 2025-09-23 08:28:17 | INFO  | Live migrating server beb352be-0526-487b-8f6c-e464af948285
2025-09-23 08:28:28.222457 | orchestrator | 2025-09-23 08:28:28 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:30.571836 | orchestrator | 2025-09-23 08:28:30 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:32.946222 | orchestrator | 2025-09-23 08:28:32 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:35.346758 | orchestrator | 2025-09-23 08:28:35 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:37.699327 | orchestrator | 2025-09-23 08:28:37 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:40.092803 | orchestrator | 2025-09-23 08:28:40 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:42.412164 | orchestrator | 2025-09-23 08:28:42 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:44.698725 | orchestrator | 2025-09-23 08:28:44 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:46.959508 | orchestrator | 2025-09-23 08:28:46 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:49.279219 | orchestrator | 2025-09-23 08:28:49 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:51.589581 | orchestrator | 2025-09-23 08:28:51 | INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) is still in progress
2025-09-23 08:28:54.006909 | orchestrator | 2025-09-23 08:28:54
| INFO  | Live migration of beb352be-0526-487b-8f6c-e464af948285 (test) completed with status ACTIVE 2025-09-23 08:28:54.409782 | orchestrator | + compute_list 2025-09-23 08:28:54.409880 | orchestrator | + osism manage compute list testbed-node-3 2025-09-23 08:28:57.269330 | orchestrator | +------+--------+----------+ 2025-09-23 08:28:57.269435 | orchestrator | | ID | Name | Status | 2025-09-23 08:28:57.269450 | orchestrator | |------+--------+----------| 2025-09-23 08:28:57.269462 | orchestrator | +------+--------+----------+ 2025-09-23 08:28:57.637708 | orchestrator | + osism manage compute list testbed-node-4 2025-09-23 08:29:00.490972 | orchestrator | +------+--------+----------+ 2025-09-23 08:29:00.491095 | orchestrator | | ID | Name | Status | 2025-09-23 08:29:00.491111 | orchestrator | |------+--------+----------| 2025-09-23 08:29:00.491123 | orchestrator | +------+--------+----------+ 2025-09-23 08:29:00.833619 | orchestrator | + osism manage compute list testbed-node-5 2025-09-23 08:29:03.967644 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-23 08:29:03.967758 | orchestrator | | ID | Name | Status | 2025-09-23 08:29:03.967773 | orchestrator | |--------------------------------------+--------+----------| 2025-09-23 08:29:03.967785 | orchestrator | | ebf8adb8-da1b-48d6-9bf0-ff8d599f4ba5 | test-4 | ACTIVE | 2025-09-23 08:29:03.967796 | orchestrator | | 0cbe7a91-cee3-4506-abfc-1980efebc2c7 | test-3 | ACTIVE | 2025-09-23 08:29:03.967807 | orchestrator | | 7e6c2c63-3aee-4150-bfdd-c9b61a55d37b | test-2 | ACTIVE | 2025-09-23 08:29:03.967818 | orchestrator | | 7f835501-63b6-45f9-9469-a0d04f94c049 | test-1 | ACTIVE | 2025-09-23 08:29:03.967829 | orchestrator | | beb352be-0526-487b-8f6c-e464af948285 | test | ACTIVE | 2025-09-23 08:29:03.967840 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-23 08:29:04.336999 | orchestrator | + server_ping 2025-09-23 08:29:04.338262 | orchestrator | ++ 
openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-23 08:29:04.338399 | orchestrator | ++ tr -d '\r' 2025-09-23 08:29:07.337919 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:29:07.338122 | orchestrator | + ping -c3 192.168.112.138 2025-09-23 08:29:07.348360 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data. 2025-09-23 08:29:07.348439 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=6.64 ms 2025-09-23 08:29:08.346412 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.53 ms 2025-09-23 08:29:09.347810 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.03 ms 2025-09-23 08:29:09.347918 | orchestrator | 2025-09-23 08:29:09.348065 | orchestrator | --- 192.168.112.138 ping statistics --- 2025-09-23 08:29:09.348092 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-23 08:29:09.348111 | orchestrator | rtt min/avg/max/mdev = 2.034/3.733/6.635/2.061 ms 2025-09-23 08:29:09.348338 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:29:09.348363 | orchestrator | + ping -c3 192.168.112.175 2025-09-23 08:29:09.361442 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 
2025-09-23 08:29:09.361523 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=7.68 ms 2025-09-23 08:29:10.358211 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=1.95 ms 2025-09-23 08:29:11.359672 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.90 ms 2025-09-23 08:29:11.359775 | orchestrator | 2025-09-23 08:29:11.359793 | orchestrator | --- 192.168.112.175 ping statistics --- 2025-09-23 08:29:11.359806 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-23 08:29:11.359818 | orchestrator | rtt min/avg/max/mdev = 1.897/3.842/7.678/2.712 ms 2025-09-23 08:29:11.359866 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:29:11.359880 | orchestrator | + ping -c3 192.168.112.129 2025-09-23 08:29:11.372218 | orchestrator | PING 192.168.112.129 (192.168.112.129) 56(84) bytes of data. 2025-09-23 08:29:11.372288 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=1 ttl=63 time=7.35 ms 2025-09-23 08:29:12.368807 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=2 ttl=63 time=2.32 ms 2025-09-23 08:29:13.370349 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=3 ttl=63 time=1.54 ms 2025-09-23 08:29:13.370467 | orchestrator | 2025-09-23 08:29:13.370556 | orchestrator | --- 192.168.112.129 ping statistics --- 2025-09-23 08:29:13.370620 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-23 08:29:13.370635 | orchestrator | rtt min/avg/max/mdev = 1.538/3.734/7.349/2.575 ms 2025-09-23 08:29:13.370740 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:29:13.370755 | orchestrator | + ping -c3 192.168.112.147 2025-09-23 08:29:13.381684 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data. 
2025-09-23 08:29:13.381840 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=6.54 ms 2025-09-23 08:29:14.379715 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.39 ms 2025-09-23 08:29:15.381532 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.00 ms 2025-09-23 08:29:15.381636 | orchestrator | 2025-09-23 08:29:15.381653 | orchestrator | --- 192.168.112.147 ping statistics --- 2025-09-23 08:29:15.381667 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-23 08:29:15.381679 | orchestrator | rtt min/avg/max/mdev = 1.997/3.644/6.542/2.055 ms 2025-09-23 08:29:15.382137 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-23 08:29:15.382198 | orchestrator | + ping -c3 192.168.112.130 2025-09-23 08:29:15.393868 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 2025-09-23 08:29:15.393974 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=6.76 ms 2025-09-23 08:29:16.391516 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.31 ms 2025-09-23 08:29:17.392987 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.93 ms 2025-09-23 08:29:17.393085 | orchestrator | 2025-09-23 08:29:17.393101 | orchestrator | --- 192.168.112.130 ping statistics --- 2025-09-23 08:29:17.393113 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-23 08:29:17.393124 | orchestrator | rtt min/avg/max/mdev = 1.926/3.665/6.762/2.195 ms 2025-09-23 08:29:17.711313 | orchestrator | ok: Runtime: 0:21:54.924565 2025-09-23 08:29:17.748983 | 2025-09-23 08:29:17.749102 | TASK [Run tempest] 2025-09-23 08:29:18.284104 | orchestrator | skipping: Conditional result was False 2025-09-23 08:29:18.298470 | 2025-09-23 08:29:18.298658 | TASK [Check prometheus alert status] 2025-09-23 08:29:18.835756 | 
orchestrator | skipping: Conditional result was False 2025-09-23 08:29:18.838765 | 2025-09-23 08:29:18.839007 | PLAY RECAP 2025-09-23 08:29:18.839162 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-09-23 08:29:18.839234 | 2025-09-23 08:29:19.073456 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-09-23 08:29:19.075884 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-23 08:29:19.842605 | 2025-09-23 08:29:19.842862 | PLAY [Post output play] 2025-09-23 08:29:19.860114 | 2025-09-23 08:29:19.860251 | LOOP [stage-output : Register sources] 2025-09-23 08:29:19.927777 | 2025-09-23 08:29:19.928119 | TASK [stage-output : Check sudo] 2025-09-23 08:29:20.761873 | orchestrator | sudo: a password is required 2025-09-23 08:29:20.966133 | orchestrator | ok: Runtime: 0:00:00.010221 2025-09-23 08:29:20.973319 | 2025-09-23 08:29:20.973445 | LOOP [stage-output : Set source and destination for files and folders] 2025-09-23 08:29:21.009336 | 2025-09-23 08:29:21.009619 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-09-23 08:29:21.091180 | orchestrator | ok 2025-09-23 08:29:21.097284 | 2025-09-23 08:29:21.097399 | LOOP [stage-output : Ensure target folders exist] 2025-09-23 08:29:21.530822 | orchestrator | ok: "docs" 2025-09-23 08:29:21.531128 | 2025-09-23 08:29:21.770107 | orchestrator | ok: "artifacts" 2025-09-23 08:29:22.007694 | orchestrator | ok: "logs" 2025-09-23 08:29:22.020703 | 2025-09-23 08:29:22.020832 | LOOP [stage-output : Copy files and folders to staging folder] 2025-09-23 08:29:22.050983 | 2025-09-23 08:29:22.051170 | TASK [stage-output : Make all log files readable] 2025-09-23 08:29:22.368136 | orchestrator | ok 2025-09-23 08:29:22.374821 | 2025-09-23 08:29:22.374996 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-09-23 08:29:22.408906 | orchestrator | skipping: Conditional result was 
False 2025-09-23 08:29:22.421841 | 2025-09-23 08:29:22.421978 | TASK [stage-output : Discover log files for compression] 2025-09-23 08:29:22.446240 | orchestrator | skipping: Conditional result was False 2025-09-23 08:29:22.464630 | 2025-09-23 08:29:22.464768 | LOOP [stage-output : Archive everything from logs] 2025-09-23 08:29:22.509995 | 2025-09-23 08:29:22.510173 | PLAY [Post cleanup play] 2025-09-23 08:29:22.519137 | 2025-09-23 08:29:22.519247 | TASK [Set cloud fact (Zuul deployment)] 2025-09-23 08:29:22.592023 | orchestrator | ok 2025-09-23 08:29:22.602301 | 2025-09-23 08:29:22.602413 | TASK [Set cloud fact (local deployment)] 2025-09-23 08:29:22.636434 | orchestrator | skipping: Conditional result was False 2025-09-23 08:29:22.651615 | 2025-09-23 08:29:22.651753 | TASK [Clean the cloud environment] 2025-09-23 08:29:26.541505 | orchestrator | 2025-09-23 08:29:26 - clean up servers 2025-09-23 08:29:27.280937 | orchestrator | 2025-09-23 08:29:27 - testbed-manager 2025-09-23 08:29:27.362604 | orchestrator | 2025-09-23 08:29:27 - testbed-node-5 2025-09-23 08:29:27.453209 | orchestrator | 2025-09-23 08:29:27 - testbed-node-0 2025-09-23 08:29:27.532867 | orchestrator | 2025-09-23 08:29:27 - testbed-node-2 2025-09-23 08:29:27.623719 | orchestrator | 2025-09-23 08:29:27 - testbed-node-4 2025-09-23 08:29:27.724452 | orchestrator | 2025-09-23 08:29:27 - testbed-node-3 2025-09-23 08:29:27.838254 | orchestrator | 2025-09-23 08:29:27 - testbed-node-1 2025-09-23 08:29:27.922386 | orchestrator | 2025-09-23 08:29:27 - clean up keypairs 2025-09-23 08:29:27.940266 | orchestrator | 2025-09-23 08:29:27 - testbed 2025-09-23 08:29:27.965798 | orchestrator | 2025-09-23 08:29:27 - wait for servers to be gone 2025-09-23 08:29:38.776579 | orchestrator | 2025-09-23 08:29:38 - clean up ports 2025-09-23 08:29:38.954445 | orchestrator | 2025-09-23 08:29:38 - 1b890fd6-14a0-408a-8b8e-cb0d9813c74f 2025-09-23 08:29:39.245705 | orchestrator | 2025-09-23 08:29:39 - 
3fac347f-f2d3-44b9-9e1e-357a2d3ac792 2025-09-23 08:29:39.498754 | orchestrator | 2025-09-23 08:29:39 - 41a236f7-f9fd-4a48-b22c-fbea42bab555 2025-09-23 08:29:39.901506 | orchestrator | 2025-09-23 08:29:39 - 4cfb2ae8-3669-492b-ae66-e0bc24849f96 2025-09-23 08:29:40.146929 | orchestrator | 2025-09-23 08:29:40 - a35f0cf2-033b-47cd-8766-95bf4b3d8946 2025-09-23 08:29:40.368100 | orchestrator | 2025-09-23 08:29:40 - bbdf43a3-9f85-477d-b40e-607bc7144676 2025-09-23 08:29:40.579403 | orchestrator | 2025-09-23 08:29:40 - e430c879-ccb1-4a2f-849d-c7bffaf9be82 2025-09-23 08:29:40.787948 | orchestrator | 2025-09-23 08:29:40 - clean up volumes 2025-09-23 08:29:40.930176 | orchestrator | 2025-09-23 08:29:40 - testbed-volume-2-node-base 2025-09-23 08:29:40.969924 | orchestrator | 2025-09-23 08:29:40 - testbed-volume-0-node-base 2025-09-23 08:29:41.009842 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-5-node-base 2025-09-23 08:29:41.050212 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-3-node-base 2025-09-23 08:29:41.091696 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-manager-base 2025-09-23 08:29:41.137276 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-4-node-base 2025-09-23 08:29:41.181084 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-1-node-base 2025-09-23 08:29:41.221369 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-1-node-4 2025-09-23 08:29:41.267363 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-0-node-3 2025-09-23 08:29:41.310726 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-2-node-5 2025-09-23 08:29:41.354834 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-8-node-5 2025-09-23 08:29:41.397051 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-4-node-4 2025-09-23 08:29:41.436789 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-6-node-3 2025-09-23 08:29:41.480110 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-5-node-5 2025-09-23 08:29:41.520094 | orchestrator | 2025-09-23 08:29:41 - 
testbed-volume-3-node-3 2025-09-23 08:29:41.561860 | orchestrator | 2025-09-23 08:29:41 - testbed-volume-7-node-4 2025-09-23 08:29:41.604331 | orchestrator | 2025-09-23 08:29:41 - disconnect routers 2025-09-23 08:29:41.785408 | orchestrator | 2025-09-23 08:29:41 - testbed 2025-09-23 08:29:42.674656 | orchestrator | 2025-09-23 08:29:42 - clean up subnets 2025-09-23 08:29:42.726731 | orchestrator | 2025-09-23 08:29:42 - subnet-testbed-management 2025-09-23 08:29:42.894905 | orchestrator | 2025-09-23 08:29:42 - clean up networks 2025-09-23 08:29:43.032280 | orchestrator | 2025-09-23 08:29:43 - net-testbed-management 2025-09-23 08:29:43.337936 | orchestrator | 2025-09-23 08:29:43 - clean up security groups 2025-09-23 08:29:43.393945 | orchestrator | 2025-09-23 08:29:43 - testbed-node 2025-09-23 08:29:43.522466 | orchestrator | 2025-09-23 08:29:43 - testbed-management 2025-09-23 08:29:43.649273 | orchestrator | 2025-09-23 08:29:43 - clean up floating ips 2025-09-23 08:29:43.681227 | orchestrator | 2025-09-23 08:29:43 - 81.163.192.228 2025-09-23 08:29:44.087678 | orchestrator | 2025-09-23 08:29:44 - clean up routers 2025-09-23 08:29:44.155042 | orchestrator | 2025-09-23 08:29:44 - testbed 2025-09-23 08:29:45.219924 | orchestrator | ok: Runtime: 0:00:22.040566 2025-09-23 08:29:45.224007 | 2025-09-23 08:29:45.224151 | PLAY RECAP 2025-09-23 08:29:45.224259 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-23 08:29:45.224314 | 2025-09-23 08:29:45.360574 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-23 08:29:45.362728 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-23 08:29:46.127165 | 2025-09-23 08:29:46.127323 | PLAY [Cleanup play] 2025-09-23 08:29:46.143813 | 2025-09-23 08:29:46.143942 | TASK [Set cloud fact (Zuul deployment)] 2025-09-23 08:29:46.201694 | orchestrator | ok 2025-09-23 08:29:46.210511 | 2025-09-23 
08:29:46.210661 | TASK [Set cloud fact (local deployment)] 2025-09-23 08:29:46.244448 | orchestrator | skipping: Conditional result was False 2025-09-23 08:29:46.254750 | 2025-09-23 08:29:46.254885 | TASK [Clean the cloud environment] 2025-09-23 08:29:47.391096 | orchestrator | 2025-09-23 08:29:47 - clean up servers 2025-09-23 08:29:47.868277 | orchestrator | 2025-09-23 08:29:47 - clean up keypairs 2025-09-23 08:29:47.883177 | orchestrator | 2025-09-23 08:29:47 - wait for servers to be gone 2025-09-23 08:29:47.923233 | orchestrator | 2025-09-23 08:29:47 - clean up ports 2025-09-23 08:29:48.001905 | orchestrator | 2025-09-23 08:29:48 - clean up volumes 2025-09-23 08:29:48.100960 | orchestrator | 2025-09-23 08:29:48 - disconnect routers 2025-09-23 08:29:48.126148 | orchestrator | 2025-09-23 08:29:48 - clean up subnets 2025-09-23 08:29:48.145759 | orchestrator | 2025-09-23 08:29:48 - clean up networks 2025-09-23 08:29:48.284286 | orchestrator | 2025-09-23 08:29:48 - clean up security groups 2025-09-23 08:29:48.325465 | orchestrator | 2025-09-23 08:29:48 - clean up floating ips 2025-09-23 08:29:48.351115 | orchestrator | 2025-09-23 08:29:48 - clean up routers 2025-09-23 08:29:48.791836 | orchestrator | ok: Runtime: 0:00:01.366841 2025-09-23 08:29:48.795761 | 2025-09-23 08:29:48.795922 | PLAY RECAP 2025-09-23 08:29:48.796043 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-23 08:29:48.796106 | 2025-09-23 08:29:48.922494 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-23 08:29:48.925184 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-23 08:29:49.686920 | 2025-09-23 08:29:49.687078 | PLAY [Base post-fetch] 2025-09-23 08:29:49.703222 | 2025-09-23 08:29:49.703352 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-23 08:29:49.758483 | orchestrator | skipping: Conditional result was False 2025-09-23 
08:29:49.773250 | 2025-09-23 08:29:49.773442 | TASK [fetch-output : Set log path for single node] 2025-09-23 08:29:49.810578 | orchestrator | ok 2025-09-23 08:29:49.818382 | 2025-09-23 08:29:49.818507 | LOOP [fetch-output : Ensure local output dirs] 2025-09-23 08:29:50.284369 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/cb22b4db87e44be8827bfb43641a1067/work/logs" 2025-09-23 08:29:50.561386 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/cb22b4db87e44be8827bfb43641a1067/work/artifacts" 2025-09-23 08:29:50.824334 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/cb22b4db87e44be8827bfb43641a1067/work/docs" 2025-09-23 08:29:50.838107 | 2025-09-23 08:29:50.838334 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-23 08:29:51.804120 | orchestrator | changed: .d..t...... ./ 2025-09-23 08:29:51.804454 | orchestrator | changed: All items complete 2025-09-23 08:29:51.804514 | 2025-09-23 08:29:52.527493 | orchestrator | changed: .d..t...... ./ 2025-09-23 08:29:53.234266 | orchestrator | changed: .d..t...... 
./ 2025-09-23 08:29:53.255871 | 2025-09-23 08:29:53.255992 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-23 08:29:53.291078 | orchestrator | skipping: Conditional result was False 2025-09-23 08:29:53.297730 | orchestrator | skipping: Conditional result was False 2025-09-23 08:29:53.311198 | 2025-09-23 08:29:53.311285 | PLAY RECAP 2025-09-23 08:29:53.311345 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-23 08:29:53.311375 | 2025-09-23 08:29:53.429208 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-23 08:29:53.430145 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-23 08:29:54.149283 | 2025-09-23 08:29:54.149455 | PLAY [Base post] 2025-09-23 08:29:54.163830 | 2025-09-23 08:29:54.163953 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-23 08:29:55.143450 | orchestrator | changed 2025-09-23 08:29:55.153564 | 2025-09-23 08:29:55.153684 | PLAY RECAP 2025-09-23 08:29:55.153759 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-23 08:29:55.153832 | 2025-09-23 08:29:55.265830 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-23 08:29:55.266796 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-23 08:29:56.141445 | 2025-09-23 08:29:56.141644 | PLAY [Base post-logs] 2025-09-23 08:29:56.151754 | 2025-09-23 08:29:56.151884 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-23 08:29:56.636616 | localhost | changed 2025-09-23 08:29:56.651905 | 2025-09-23 08:29:56.652064 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-09-23 08:29:56.690441 | localhost | ok 2025-09-23 08:29:56.696644 | 2025-09-23 08:29:56.696794 | TASK [Set zuul-log-path fact] 2025-09-23 
08:29:56.713950 | localhost | ok 2025-09-23 08:29:56.728671 | 2025-09-23 08:29:56.728807 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-23 08:29:56.765921 | localhost | ok 2025-09-23 08:29:56.771868 | 2025-09-23 08:29:56.772025 | TASK [upload-logs : Create log directories] 2025-09-23 08:29:57.277091 | localhost | changed 2025-09-23 08:29:57.280066 | 2025-09-23 08:29:57.280173 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-23 08:29:57.771577 | localhost -> localhost | ok: Runtime: 0:00:00.007325 2025-09-23 08:29:57.780935 | 2025-09-23 08:29:57.781146 | TASK [upload-logs : Upload logs to log server] 2025-09-23 08:29:58.367761 | localhost | Output suppressed because no_log was given 2025-09-23 08:29:58.370950 | 2025-09-23 08:29:58.371121 | LOOP [upload-logs : Compress console log and json output] 2025-09-23 08:29:58.433757 | localhost | skipping: Conditional result was False 2025-09-23 08:29:58.437638 | localhost | skipping: Conditional result was False 2025-09-23 08:29:58.441877 | 2025-09-23 08:29:58.441995 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-23 08:29:58.492992 | localhost | skipping: Conditional result was False 2025-09-23 08:29:58.493588 | 2025-09-23 08:29:58.500186 | localhost | skipping: Conditional result was False 2025-09-23 08:29:58.508654 | 2025-09-23 08:29:58.508779 | LOOP [upload-logs : Upload console log and json output]